
Mastering Python Requests: A Comprehensive Guide to Using Proxies

When using Python's Requests library, proxies can help with tasks like web scraping, interacting with APIs, or accessing geo-restricted content. Proxies route HTTP requests through different IP addresses, helping you avoid IP bans, maintain anonymity, and bypass restrictions. This guide covers how to set up and use proxies with the Requests library. Let’s get started!

Zilvinas Tamulis

Feb 29, 2024

12 min read


The importance of Python Requests

Requests is a library that enables Python developers to communicate with the web through HTTP requests. It provides an intuitive and elegant API that simplifies the process of making GET and POST requests and other methods of communication with a web server. It’s a popular Python library with over 300 million monthly downloads and 50K+ stars on GitHub, making it one of the most reliable and trusted tools out there. Requests has cemented its place in the developer community for several reasons:

  • Simple and easy to use. If you were to write HTTP requests manually, you’d end up with large chunks of code that are hard to read, maintain, and understand. Requests simplifies the process, reducing the amount of code you must write for complex tasks.
  • Vast amount of features. The Python Requests module has many features that simplify the developer's life – a consistent interface, session and cookie persistence, built-in authentication support, and content parsing to data structures such as JSON. These are only a few of the things that Requests offers as a library, and there are no limits on what it can do – it’s extensible for more advanced use cases and scenarios, so you can be sure you’ll never run into a dead end.
  • Proxy support and integration. Finally, Requests features proxy support, allowing you to integrate Smartproxy proxies easily. A proxy enhances security by making requests appear to come from different locations and IP addresses. This is incredibly useful for web scraping or automation tools that risk being rate-limited or IP-banned by certain websites if too many requests are made from the same device.

Getting started with Python Requests

To get started with the Python Requests library, make sure you have the latest version of Python installed on your system. Next, open your terminal tool and run the following command to install the Requests package:

pip install requests

If you encounter an "error: externally-managed-environment" message, refer to our troubleshooting guide for a quick solution. Once installed, import Requests in your Python code with this line:

import requests

A simple example

The requests library makes it easy to send HTTP requests with just a few lines of code. Let’s start with a basic example – fetching data from a website:

import requests
response = requests.get("https://ip.smartproxy.com/json")
if response.status_code == 200:
    print(response.json())  # Print the response in JSON format
else:
    print("Request failed with status code:", response.status_code)

That's the beauty of Requests – just a few lines of code to make a request, retrieve data, and display it.

Key features

The Requests library is versatile and supports a range of functionalities for different HTTP operations. Here are some key features:

1. Handling GET and POST requests. Requests makes it easy to perform both GET and POST requests. Here's a simple GET request that retrieves data from a server:

import requests
url = "https://httpbin.org/get"
response = requests.get(url)
if response.status_code == 200:
    data = response.json()
    print(data)
else:
    print("GET request failed with status code:", response.status_code)

POST requests, on the other hand, allow you to submit data to servers for processing:

import requests
# Define endpoint and payload
url = "https://httpbin.org/post"
payload = {"username": "testing", "password": "admin123"}
# Execute POST request
response = requests.post(url, data=payload)
# Process response
if response.status_code == 200:
    data = response.json()
    print(data)
else:
    print("POST request failed with status code:", response.status_code)

2. Session management. Sessions allow you to maintain state across multiple requests by preserving cookies and headers. Here's an example that shows how to set up and use a session:

import requests
# Initialize session object
session = requests.Session()
# Configure session parameters
session.headers.update({'User-Agent': 'python-requests-example/1.0'})
# Execute request with session
url = "https://httpbin.org/headers"
response = session.get(url)
# Validate response
if response.status_code == 200:
    data = response.json()
    print(data)
else:
    print("Session request failed with status code:", response.status_code)

3. Timeout management. Setting timeouts ensures your program doesn’t hang indefinitely if a server is slow to respond.

import requests
url = "https://httpbin.org/get"
# Set a timeout of 5 seconds
response = requests.get(url, timeout=5)
if response.status_code == 200:
    print(response.json())
else:
    print("Request failed with status code:", response.status_code)

Running your script

To execute these examples, save your code in a Python file and run the following command in the terminal:

python file_name.py

Integrating proxies with Python Requests

Proxies act as intermediaries between your application and target websites by routing your web requests through a proxy server while masking your original IP address. They help maintain privacy, bypass geographical restrictions, and avoid detection by anti-bot systems. There are several proxy types available, including residential, datacenter, and mobile. Residential proxies are generally the most reliable because they use IP addresses associated with real residential devices.

Let's integrate Smartproxy proxies into your code. Many websites employ robust anti-bot measures, and using Smartproxy proxies can help ensure your automated requests succeed.

To begin, head over to the Smartproxy dashboard. From there, select a proxy type of your choice. You can choose between residential, ISP, and datacenter proxies. We’ll use residential proxies for this example:

  1. Find residential proxies by navigating to Residential under the Residential proxies column on the left panel, and purchase a plan that best suits your needs.
  2. Open the Proxy setup tab.
  3. Head over to the Endpoint generator.
  4. Configure the proxy parameters according to your needs. Set the authentication method, location, session type, and protocol.
  5. Click on Code examples.
  6. Further below in the code window, select Python on the left.
  7. Copy the code.

These steps will provide you with an example code snippet that conveniently also uses the Requests library:

# The code might appear slightly different from this one according to the parameters you've set
import requests
url = 'https://ip.smartproxy.com/json'
username = 'user'
password = 'pass'
proxy = f"http://{username}:{password}@gate.smartproxy.com:10001"
result = requests.get(url, proxies={
    'http': proxy,
    'https': proxy
})
print(result.text)

This code makes a simple request to https://ip.smartproxy.com/json, which returns information about your location and IP address in JSON format. You can be sure that the proxies work because they’ll show a different location and IP address than your current one.
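
If you want to verify this in code rather than by eye, you can compare the IP address returned with and without the proxy. The sketch below assumes the JSON response contains a proxy object with an ip field (the same structure the rotation example later in this guide relies on) – adjust the parsing to whatever your endpoint actually returns:

import requests
url = 'https://ip.smartproxy.com/json'
proxy = "http://user:pass@gate.smartproxy.com:10001"
# Request made directly, without a proxy
direct = requests.get(url).json()
# The same request routed through the proxy
proxied = requests.get(url, proxies={'http': proxy, 'https': proxy}).json()
direct_ip = direct.get('proxy', {}).get('ip')    # Assumed response structure
proxied_ip = proxied.get('proxy', {}).get('ip')
print("Direct IP:", direct_ip)
print("Proxy IP:", proxied_ip)
print("Proxy is working:", direct_ip is not None and direct_ip != proxied_ip)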

POST requests

Similar to GET requests, you can also send POST requests. A POST request is a method for submitting data to a specified resource on a server for processing. In contrast to GET requests, which retrieve data, POST requests typically send data in the request body and are often used for tasks like form submissions or file uploads.

Here’s an example code to send a POST request to a target website with data:

import requests
url = 'https://httpbin.org/post'
username = 'user'
password = 'pass'
proxy = f"http://{username}:{password}@gate.smartproxy.com:10000"
# Data for the POST request
data = {'key1': 'value1', 'key2': 'value2'}
result = requests.post(url, data=data, proxies={'http': proxy, 'https': proxy})
# Print the response content
print(result.text)

This script sends a POST request to the httpbin testing service, which returns a response like this upon success:

{
  "args": {},
  "data": "",
  "files": {},
  "form": {
    "key1": "value1",
    "key2": "value2"
  },
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Content-Length": "23",
    "Content-Type": "application/x-www-form-urlencoded",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.31.0"
  },
  "json": null,
  "origin": "xxx.xxx.xx.xxx",
  "url": "https://httpbin.org/post"
}

The data you sent appears under “form”, indicating that it was passed to the server. You can also check the IP address next to “origin” to confirm that the request was made from an IP address different from your own.
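
If you’d rather check this programmatically, you can parse the response with .json() and inspect those fields directly. A minimal sketch, reusing the same httpbin endpoint and payload as above:

import requests
url = 'https://httpbin.org/post'
proxy = "http://user:pass@gate.smartproxy.com:10000"
data = {'key1': 'value1', 'key2': 'value2'}
result = requests.post(url, data=data, proxies={'http': proxy, 'https': proxy})
body = result.json()
# httpbin echoes back the submitted form data and the requesting IP address
print("Form data received:", body['form'])
print("Request origin IP:", body['origin'])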

Basic authentication

Sometimes you’ll need to provide credentials to communicate with a server. Here’s a basic authentication example that makes a request through a proxy while also sending a username and password to the target server. Note that the proxy credentials and the server credentials are separate – the latter are passed with the auth parameter:

import requests
from requests.auth import HTTPBasicAuth
url = 'https://ip.smartproxy.com/json'
proxy_username = 'proxy_user'
proxy_password = 'proxy_pass'
auth_username = 'your_username'
auth_password = 'your_password'
proxy = f"http://{proxy_username}:{proxy_password}@gate.smartproxy.com:10000"
# Credentials for the target server (separate from the proxy credentials)
general_auth_credentials = HTTPBasicAuth(auth_username, auth_password)
result = requests.get(url, proxies={
    'http': proxy,
    'https': proxy
}, auth=general_auth_credentials)
print(result.text)

Status codes

When you make any kind of HTTP request, you’ll receive a status code that tells you whether your request succeeded or not. To learn more about what the status code you’ve received means, check out our comprehensive documentation that lists explanations of the most common responses.
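
In code, you can check response.status_code directly or call raise_for_status(), which raises an HTTPError for any 4xx or 5xx response. A short sketch using an httpbin endpoint that always returns 404:

import requests
try:
    response = requests.get("https://httpbin.org/status/404")
    response.raise_for_status()  # Raises HTTPError for 4xx/5xx responses
    print("Success:", response.status_code)
except requests.exceptions.HTTPError as error:
    print("HTTP error occurred:", error)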

Advanced usage

In addition to the basic proxy integration, there are several advanced techniques for using proxies with the Requests library.

Session management with proxies

When making multiple requests through a proxy server, using a Session object can be highly useful. A session reuses the same TCP connection and retains settings, cookies, headers, and authentication data between requests. This is particularly useful when interacting with websites that require login credentials, as it allows you to remain authenticated without re-sending credentials on every request.

Below is an example of how to set up and use a proxy session in Python Requests:

import requests
import time
HN_USER = 'HN_USERNAME'
HN_PASS = 'HN_PASSWORD'
session = requests.Session()
# Set the proxies in the session object
username = 'PROXY_USERNAME'
password = 'PROXY_PASSWORD'
session.proxies = {
    'http': f'http://{username}:{password}@gate.smartproxy.com:10001',
    'https': f'http://{username}:{password}@gate.smartproxy.com:10001',
}
# Initially, check the authentication status
response = session.get('https://news.ycombinator.com/submit')
print("User authenticated: {}".format("logged in" in response.text))
# Perform login using the session
session.post('https://news.ycombinator.com/login', { 'goto': 'news', 'acct': HN_USER, 'pw': HN_PASS })
# Wait a few seconds to allow login to complete
time.sleep(5)
# Check the authentication status again
response = session.get('https://news.ycombinator.com/submit')
print("User authenticated: {}".format("logged in" in response.text))

In this code, the first request to the submission page indicates that the user is not logged in. After posting the login credentials, the session retains the authentication state, and a subsequent request shows that the user is now authenticated.

Rotating proxies with Python requests

Rotating proxies help distribute your requests across multiple IP addresses, which is useful for avoiding IP bans, rate limits, or geographic restrictions. The following code shows a simple rotation mechanism with error handling:

import requests
import time
# List of proxies to test
proxies_list = [
    "http://username:password@gate.smartproxy.com:10001",
    "http://username:password@gate.smartproxy.com:10002",
    "http://username:password@gate.smartproxy.com:10003",
]
def test_proxies():
    url = "https://ip.smartproxy.com/json"
    for proxy in proxies_list:
        try:
            proxies = {'http': proxy, 'https': proxy}
            response = requests.get(url, proxies=proxies, timeout=5)
            if response.status_code == 200:
                data = response.json()
                ip = data.get('proxy', {}).get('ip', 'Unknown')
                print(f"IP: {ip}")
            else:
                print(f"Failed with status code: {response.status_code}")
        except Exception as e:
            print(f"Error: {str(e)}")
        time.sleep(1)  # Brief pause between requests
if __name__ == "__main__":
    test_proxies()

When you run this script several times, you'll notice that each request uses a different IP address, showing how IP rotation works.
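
In a real scraper, you’d usually take the next proxy from the pool for each new request instead of testing them all in one pass. A simple way to do that, assuming the same proxies_list as above, is with itertools.cycle:

import itertools
import requests
proxy_pool = itertools.cycle(proxies_list)  # Endlessly loop over the proxy list
urls_to_scrape = ["https://ip.smartproxy.com/json"] * 5  # Placeholder targets
for url in urls_to_scrape:
    proxy = next(proxy_pool)  # Pick the next proxy in the rotation
    try:
        response = requests.get(url, proxies={'http': proxy, 'https': proxy}, timeout=5)
        print(response.json())
    except requests.exceptions.RequestException as e:
        print(f"Request through {proxy} failed: {e}")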

Handling proxy errors

By default, Requests verifies SSL certificates on HTTPS requests. However, when working with proxies, certificate verification can sometimes raise SSLError exceptions. To avoid these errors, you can disable SSL verification by setting verify=False. Keep in mind that this also disables protection against man-in-the-middle attacks, so only use it when you understand the trade-off:

import requests
proxies = {
    'http': 'http://username:password@gate.smartproxy.com:10001',
    'https': 'http://username:password@gate.smartproxy.com:10001',
}
response = requests.get(
    'https://ip.smartproxy.com/json',
    proxies=proxies,
    verify=False  # Disable SSL certificate verification
)
print(response.text)
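
Rather than disabling verification everywhere, a more robust pattern is to catch the specific exceptions Requests raises for proxy-related failures and handle each one explicitly. A minimal sketch, using the same proxy endpoint as above:

import requests
proxies = {
    'http': 'http://username:password@gate.smartproxy.com:10001',
    'https': 'http://username:password@gate.smartproxy.com:10001',
}
try:
    response = requests.get('https://ip.smartproxy.com/json', proxies=proxies, timeout=10)
    response.raise_for_status()
    print(response.json())
except requests.exceptions.ProxyError:
    print("Could not connect to the proxy – check the endpoint and credentials")
except requests.exceptions.SSLError:
    print("SSL certificate verification failed")
except requests.exceptions.Timeout:
    print("The request timed out")
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")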

Use cases

Python Requests can be applied for many uses, either independently or with other tools or libraries. Here are some cool examples:

  • API requests – many web applications offer intuitive API interfaces to interact with them. If you’re building an application of your own that relies on the information and content another one provides, accessing their API and interacting with it is extremely important. For example, imagine you’re building a Python app that tells users about the latest music trends, the most popular songs this week, and so on. You’d need to access the APIs of various streaming platforms and retrieve data such as play counts, artist and track names, and album covers. Python Requests helps make API calls by providing an intuitive interface for sending HTTP requests, allowing developers to easily access and interact with web APIs to fetch the necessary data.
  • Handling responses – when you make an HTTP request to a web server, it will always provide one of many responses. This will be an HTTP response status code, a way for the server to tell you whether your request was successful, failed, or something happened in between. If your code interacts with a web service often, it will likely run into one of these HTTP errors eventually, especially if it makes multiple requests from the same IP address and gets rate-limited or blocked. It’s crucial to identify when these errors happen and modify your code to react accordingly to the response type it receives. Python Requests simplifies handling responses with built-in methods to check status codes and parse returned data so developers can manage errors and update their requests as needed.
  • Web scraping – one of the most common reasons to write scripts that search the web for you is data collection, otherwise known as scraping. It’s the process of making an HTTP request to a web page, extracting all the data from it, and then parsing it or saving it for later analysis. This information is used for research, market intelligence, competitor analysis, price comparison, and many other purposes. Python Requests assists with web scraping by providing an easy way to retrieve webpage content via HTTP requests, which can be integrated with libraries like Beautiful Soup to extract and organize data for further analysis (see the short sketch after this list). Learn more about Python web scraping.
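
As a quick illustration of the web scraping use case, the sketch below fetches a page through a proxy and pulls out a few details with Beautiful Soup (install it with pip install beautifulsoup4). The target is books.toscrape.com, a public practice site for scraping, and the proxy credentials are placeholders:

import requests
from bs4 import BeautifulSoup
proxy = "http://username:password@gate.smartproxy.com:10001"
response = requests.get("https://books.toscrape.com/", proxies={'http': proxy, 'https': proxy})
soup = BeautifulSoup(response.text, "html.parser")
# Print the page title and the first few book titles found on the page
print("Page title:", soup.title.get_text(strip=True))
for link in soup.select("h3 a")[:5]:
    print(link.get("title"))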

Proxy performance and security benefits 

Smartproxy proxies complement the Requests library by extending what it can do. They enhance speed and reliability and prevent your code from running into common issues when running automated scripts on the web.

The most important thing to remember is that a regular user browsing websites will usually not run into any issues. Some of their actions may raise suspicion, but it will most likely be resolved quickly, and no red flags will be raised. Meanwhile, an automated script is a wild, untamed beast that doesn’t follow the standard behavior of a regular user, so websites will quickly recognize it, try to stop it, and put it back in its cage.

While it’s completely understandable that websites implement measures against potentially harmful bots, not all scripts are malicious, yet they face the same limitations and consequences.

Proxy services come in handy when trying to circumvent these limitations, as they allow you to make requests from different IP addresses while maintaining your anonymity. This way, websites don’t know if the requests come from the same or multiple sources, making limiting or blocking harder.

Smartproxy offers a wide range of proxy solutions to use together with Requests. You can buy proxies, choosing from various proxy types and a massive pool of 65M+ IPs across 195+ locations worldwide, with unlimited connections, threads, and fast response times. These features make your automated actions on the web more secure and harder to detect, without sacrificing performance or requiring massive changes to your code or infrastructure.

Final thoughts

Python Requests remains one of the most popular choices for many web-based applications, and it doesn’t look like that will change any time soon. It’s a crucial tool at your disposal and will be a base for many of your web scraping scripts. The best part about it is that it’s super simple to get started, and when paired with Smartproxy proxies, it becomes an unstoppable force that will easily overcome any difficulties.

About the author

Zilvinas Tamulis

Technical Copywriter

A technical writer with over 4 years of experience, Žilvinas blends his studies in Multimedia & Computer Design with practical expertise in creating user manuals, guides, and technical documentation. His work includes developing web projects used by hundreds daily, drawing from hands-on experience with JavaScript, PHP, and Python.


Connect with Žilvinas via LinkedIn

All information on Smartproxy Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Smartproxy Blog or any third-party websites that may be linked therein.

Frequently asked questions

How do you make an HTTP GET request?

To make a basic GET request, you can use Python’s Requests library HTTP request method – requests.get(). Provide the URL as an argument, and the response object will contain information like status code, request headers, and content.

How can I test if my proxy setup is working correctly?

You can test your proxy setup by making a request to a service that returns network details, such as your IP address and location (e.g., https://ip.smartproxy.com/json). If the returned data matches your proxy's details, your configuration is working correctly. If it shows your actual IP, double-check your proxy settings and credentials.
