In this tutorial, you will learn how to get a URL in Python using the popular requests library. This library is widely used for fetching data from URLs, handling APIs, and web scraping. We will go through the steps to install the requests library and use it to get URL content in Python.
Step 1: Install requests library
Before you get started, ensure that you have the latest version of Python installed on your computer. You can check your Python version by running the command:
python --version
If you have Python installed, you can proceed to install the requests library using the pip command:
pip install requests
Step 2: Import requests library
Now that the requests library is installed, you should import it into your Python script:
import requests
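To confirm that the import works, you can print the version of the library that is installed (requests exposes it as requests.__version__). A minimal check:

import requests

# Print the installed requests version to confirm the library is importable
print(requests.__version__)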
Step 3: Fetching URL content – GET request
To fetch the content of a URL using the requests library, you use the get() method. This method takes the URL as its argument and returns a Response object:
response = requests.get("https://www.example.com")

Make sure to replace "https://www.example.com" with the URL you want to fetch.
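Besides the URL, get() accepts optional keyword arguments. For example, you can pass query parameters with params and a timeout in seconds with timeout. The endpoint below is only an assumed example; any URL that accepts a query string works the same way. A minimal sketch:

import requests

# Assumed example endpoint; replace with any URL that accepts query parameters
url = "https://www.example.com/search"

# params is encoded into the query string; timeout stops the request from hanging forever
response = requests.get(url, params={"q": "python"}, timeout=10)

print(response.url)          # final URL, including the encoded query string
print(response.status_code)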
Step 4: Accessing response content and status
After making a GET request, you can access the content of the response using the text property and the status code using the status_code property. Here's an example:
url = "https://www.example.com"
response = requests.get(url)

print("URL content:")
print(response.text)

print("\nStatus code:")
print(response.status_code)
In this example, we fetch “https://www.example.com” and print its content and status code.
Full Code:
import requests

url = "https://www.example.com"
response = requests.get(url)

print("URL content:")
print(response.text)

print("\nStatus code:")
print(response.status_code)
Output:
The output is the content of the URL followed by the status code:
Status code: 200
The output shows the content of the URL (HTML code in this case) and the status code of the request (200, meaning successful).
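Since the status code tells you whether the request succeeded, a common pattern is to check it before using the response body. A minimal sketch (response.ok is True for status codes below 400):

import requests

response = requests.get("https://www.example.com")

# response.ok is True for status codes below 400, i.e. the request did not fail
if response.ok:
    print(response.text)
else:
    print(f"Request failed with status code {response.status_code}")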
Step 5: Handling errors and exceptions
Sometimes, you might encounter errors like invalid URLs or network issues while fetching a URL in Python. To handle these errors, you should use exception handling with try and except blocks:
import requests

url = "https://www.example.com"

try:
    response = requests.get(url)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")
else:
    print("URL content:")
    print(response.text)

    print("\nStatus code:")
    print(response.status_code)
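If you want to react differently to different failure modes, you can catch the more specific exception classes that requests defines, such as Timeout and HTTPError, before falling back to the RequestException base class. A sketch:

import requests

url = "https://www.example.com"

try:
    response = requests.get(url, timeout=5)
    response.raise_for_status()  # raises HTTPError for 4xx and 5xx responses
except requests.exceptions.Timeout:
    print("The request timed out")
except requests.exceptions.HTTPError as e:
    print(f"The server returned an error status: {e}")
except requests.exceptions.RequestException as e:
    # Base class for all requests exceptions (connection errors, invalid URLs, ...)
    print(f"An error occurred: {e}")
else:
    print(response.text)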
With these steps, you should be able to fetch URLs in Python using the requests library. Now you can move forward with making API requests, web scraping, or any other web-related tasks.
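For instance, many web APIs return JSON rather than HTML; in that case you can decode the response body with response.json(). A minimal sketch, assuming a hypothetical JSON endpoint:

import requests

# Hypothetical JSON endpoint used purely for illustration
url = "https://api.example.com/data"

response = requests.get(url, timeout=10)
response.raise_for_status()

# .json() parses the response body as JSON and returns Python data structures
data = response.json()
print(data)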
Conclusion
In this tutorial, you have learned how to get a URL in Python using the requests library. You have installed and imported the library, fetched a URL, accessed its content and status, and learned how to handle errors and exceptions. With this knowledge, you can now proceed to make API requests, scrape web pages, and build dynamic web-based applications.