Retry Failed Python Requests in 2025
No reliable Python application gets by without handling failed HTTP requests. Whether you're fetching API data, scraping websites, or interacting with web services, unexpected failures like timeouts, connection issues, or server errors can disrupt your workflow at any time. This blog post explores strategies to manage these failures using Python's requests library, including retry logic, best practices, and techniques like integrating proxies or custom retry mechanisms.
TL;DR
- Use retry mechanisms like "HTTPAdapter" to reattempt failed requests automatically.
- Implement retry decorators or custom logic for reusable and flexible error handling.
- Integrate proxies to avoid IP bans and rate limits, especially for web scraping.
- Follow best practices: set timeouts, handle exceptions, use sessions, and validate responses.
Python requests library
The Python requests library stands as the premier HTTP library for Python, transforming intricate web operations into remarkably simple tasks. Its elegant syntax has made it the go-to choice for developers worldwide, letting them execute HTTP requests in just a few lines of code while it manages complexities like URL encoding, SSL verification, and session handling behind the scenes.
At its core, requests streamlines web interactions, making it easy to retrieve data from APIs, submit forms, or scrape websites. It supports all major HTTP methods, including GET, POST, PUT, and DELETE, while providing advanced capabilities such as custom headers, authentication, and cookie management.
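For instance, a GET request with a custom header and a POST request with a JSON payload each take only a couple of lines. Here's a minimal sketch using httpbin.org, a public testing service, as a stand-in endpoint:

import requests

# A simple GET request with a custom header (httpbin.org echoes the request back)
response = requests.get(
    "https://httpbin.org/get",
    headers={"User-Agent": "my-app/1.0"},
    timeout=10
)
print(response.status_code)

# A POST request with a JSON payload
response = requests.post("https://httpbin.org/post", json={"key": "value"}, timeout=10)
print(response.json())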
What truly sets requests apart is its intuitive syntax and ability to handle modern web protocols effortlessly. By abstracting the intricacies of raw HTTP, it empowers developers to focus on what matters most: building applications and solving problems effectively.
Failed Python requests
When working with Python requests, encountering failed requests is inevitable. Failed requests can stem from various reasons, including network issues, server-side problems, or incorrect configurations. Let’s review the types of errors and exceptions that requests may raise.
Requests library exceptions
The Python requests library provides a structured hierarchy of exceptions to help developers handle specific error conditions effectively. All exceptions inherit from the base class: "requests.exceptions.RequestException." This is the parent class for all requests exceptions. Catching this exception will handle any error raised by the library, but for more granular control, you can handle specific exceptions, such as:
| Exception | Meaning | Solution |
| --- | --- | --- |
| requests.exceptions.ConnectionError | Raised when a connection to the server fails due to DNS resolution issues, refused connections, or network interruptions. | Check network connectivity, verify DNS settings, and ensure the server is reachable. |
| requests.exceptions.ConnectTimeout | Raised when a connection attempt times out, often due to an unresponsive server. | Increase the timeout setting or verify the server's availability. |
| requests.exceptions.ReadTimeout | Raised when the server takes too long to respond after the connection has been established. | Adjust timeout values or handle the exception with a retry mechanism. |
| requests.exceptions.Timeout | The parent class of ConnectTimeout and ReadTimeout, covering any timeout-related issue. | Catch this exception and handle timeouts appropriately with retries or increased timeouts. |
| requests.exceptions.HTTPError | Raised when a non-2xx status code is encountered, specifically when .raise_for_status() is called. | Check response status codes before calling .raise_for_status(), and handle 4xx/5xx errors accordingly. |
| requests.exceptions.URLRequired | Raised when a valid URL is missing from the request method. | Ensure the request includes a valid URL. |
| requests.exceptions.TooManyRedirects | Raised when the number of allowed redirects is exceeded, often indicating a redirect loop. | Reduce redirects or set a reasonable maximum redirect limit. |
| requests.exceptions.InvalidHeader | Raised when a request header is malformed or contains invalid characters. | Verify headers are correctly formatted and contain valid characters. |
| requests.exceptions.ChunkedEncodingError | Raised when an error occurs during chunked transfer encoding. | Ensure the server supports chunked encoding, or retry with a different encoding method. |
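Because the more specific exceptions subclass the broader ones, catch them from most to least specific. Here's a minimal sketch (the URL is a placeholder):

import requests

try:
    response = requests.get("https://example.com", timeout=5)
    response.raise_for_status()  # Turns 4xx/5xx responses into HTTPError
except requests.exceptions.ConnectTimeout:
    print("Connection attempt timed out")
except requests.exceptions.ReadTimeout:
    print("Server was too slow to respond")
except requests.exceptions.ConnectionError:
    print("Network or DNS problem")
except requests.exceptions.HTTPError as e:
    print(f"Bad status code: {e}")
except requests.exceptions.RequestException as e:
    print(f"Any other requests error: {e}")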
Common HTTP error status codes
Not all failed requests raise exceptions. Sometimes a response arrives, but its non-2xx HTTP status code indicates failure. These errors are returned by the server itself, and common ones include:
| Error | Meaning | Solution |
| --- | --- | --- |
| 400 Bad Request | The server cannot process the request due to invalid syntax. | Check the request format, headers, and parameters. |
| 401 Unauthorized | Authentication is missing or incorrect. | Ensure authentication credentials are included and correct. |
| 403 Forbidden | The server refuses to authorize the request. | Verify permissions and authentication details. |
| 404 Not Found | The requested resource is unavailable. | Check the URL for typos or verify resource availability. |
| 429 Too Many Requests | The client has exceeded the allowed request limit. | Slow down the request rate or honor the "Retry-After" header. |
| 500 Internal Server Error | A generic error on the server. | Retry later; if the error persists, contact the server administrator. |
| 502 Bad Gateway | The server received an invalid response from the upstream server. | Retry after some time or check the upstream service status. |
| 503 Service Unavailable | The server is temporarily unavailable, often due to overload or maintenance. | Wait and retry; consider implementing exponential backoff. |
| 504 Gateway Timeout | The upstream server did not respond in time. | Increase timeout settings or retry with a delay. |
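For example, a 429 response can be handled by honoring the server's "Retry-After" header. Here's a minimal sketch, where get_with_rate_limit is an illustrative helper and the URL is a placeholder (urllib3's Retry class, covered below, also respects this header by default):

import time
import requests

def get_with_rate_limit(url, max_attempts=3):
    # Retry manually when the server answers 429, waiting as long as it asks
    for _ in range(max_attempts):
        response = requests.get(url, timeout=10)
        if response.status_code != 429:
            return response
        try:
            wait = int(response.headers.get("Retry-After", "1"))
        except ValueError:
            wait = 1  # Fall back if the header is a date rather than seconds
        time.sleep(wait)
    return response

response = get_with_rate_limit("https://example.com")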
Python requests retry logic and strategy
Retry logic in Python requests refers to the practice of automatically reattempting failed requests to ensure successful communication with web services. Network issues, server downtime, or transient errors can occasionally cause requests to fail, and implementing retry logic minimizes the impact of these temporary disruptions. A retry strategy defines how and when to retry, balancing reliability and efficiency without overwhelming the server or degrading the user experience.
How often should you retry failed Python requests?
Deciding how often to retry depends on the nature of the application and the expected reliability of the external server. Critical operations, such as processing payments or handling sensitive data, may warrant more retry attempts than non-critical tasks. However, retries should be implemented cautiously to avoid unnecessarily burdening the server or triggering rate-limiting mechanisms.
Number of retries
The number of retry attempts should balance resilience and resource efficiency. A common practice is to limit retries to 3-5 attempts, ensuring the request has sufficient chances to succeed without indefinitely retrying. Python libraries like urllib3, used under the hood by requests, allow you to configure the number of retries through settings like "Retry(total=3)."
Request delay
Introducing a delay between retries prevents overwhelming the server with rapid, successive attempts. Delays also provide the server time to recover from transient issues, increasing the likelihood of success on subsequent retries.
Fixed or random delay
Retry delays can follow either a fixed or random pattern:
- Fixed delay. A constant delay (e.g., 1 second) is applied between retries. While simple to implement, fixed delays don't adapt to server congestion or changing network conditions and are more easily identified as automated requests.
- Random delay. A delay randomized within a specified range helps distribute requests more evenly and avoid server overload. This approach is particularly useful when multiple clients are retrying requests simultaneously (see the sketch below).
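Here's a minimal sketch contrasting the two approaches; retry_with_delay is an illustrative helper, not a library function:

import random
import time

import requests

def retry_with_delay(func, retries=3, base_delay=1, jitter=False):
    # Retry func(), sleeping between attempts with either a fixed delay
    # or a randomized (jittered) one drawn from 0.5x-1.5x the base delay
    for attempt in range(retries):
        try:
            return func()
        except requests.exceptions.RequestException as e:
            delay = base_delay * random.uniform(0.5, 1.5) if jitter else base_delay
            print(f"Attempt {attempt + 1} failed ({e}); sleeping {delay:.2f}s")
            time.sleep(delay)
    raise Exception("All retries failed")

response = retry_with_delay(lambda: requests.get("https://example.com", timeout=5), jitter=True)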
By carefully configuring retry logic with appropriate retry counts, delays, and strategies, you can enhance the reliability of your application while minimizing unnecessary network traffic and server strain.
Python requests retry methods
Implementing retries in Python requests can be achieved using various methods, each with its strengths and use cases. Below, we outline the most efficient retry mechanisms to handle failed requests and ensure reliability.
Python request retries with HTTPAdapter
The most commonly used retry method involves "HTTPAdapter" from the "requests.adapters" module. By integrating the "Retry" class from urllib3, you can configure retry parameters like the number of attempts, delay intervals, and which HTTP response codes to retry.
Here’s a basic implementation:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Configure retry strategy
retry_strategy = Retry(
    total=3,  # Total retries
    backoff_factor=1,  # Delay between retries (e.g., 1, 2, 4 seconds)
    status_forcelist=[500, 502, 503, 504],  # HTTP status codes to retry
    allowed_methods=["HEAD", "GET", "OPTIONS"]  # Retry only for safe methods
)

# Attach the retry strategy to an HTTPAdapter
adapter = HTTPAdapter(max_retries=retry_strategy)
session = requests.Session()
session.mount("https://", adapter)
session.mount("http://", adapter)

response = session.get("https://example.com")
print(response.content)
Requests retries with decorator
Using decorators is another elegant way to implement retry logic, particularly when you want a reusable and customizable solution. Decorators wrap functions with retry logic, making the code cleaner and easier to maintain.
Here’s an example:
import requests
from functools import wraps
import time

def retry_request(retries=3, delay=2, backoff=2):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            attempts = 0
            current_delay = delay  # Local copy so repeated calls start fresh
            while attempts < retries:
                try:
                    return func(*args, **kwargs)
                except requests.exceptions.RequestException:
                    attempts += 1
                    time.sleep(current_delay)
                    current_delay *= backoff  # Exponential backoff
            raise Exception("Max retries reached")
        return wrapper
    return decorator

@retry_request(retries=3, delay=1, backoff=2)
def fetch_url(url):
    return requests.get(url)

response = fetch_url("https://example.com")
print(response.content)
Custom request retries
For more control, you can implement custom retry logic tailored to specific requirements. This method is useful when standard libraries do not meet your needs or when you want to handle unique scenarios, such as retries based on custom error conditions or dynamic retry intervals.
See this example of a custom retry mechanism:
import requests
import time

def custom_retry(url, retries=3, delay=2):
    for attempt in range(retries):
        try:
            response = requests.get(url)
            if response.status_code == 200:
                return response
        except requests.exceptions.RequestException as e:
            print(f"Attempt {attempt + 1} failed: {e}")
        time.sleep(delay)  # Pause before the next attempt
    raise Exception("All retry attempts failed")

response = custom_retry("https://example.com")
Request retries with proxies
Integrating proxies into a Python requests-based environment can significantly enhance the reliability of your retry strategy, especially when dealing with web scraping or APIs that enforce rate limits and IP-based restrictions. Proxies act as intermediaries between your client and the server, masking your IP address and distributing requests across multiple IPs. This approach reduces the likelihood of getting blocked or flagged as a bot.
By using proxies, you gain greater control over your requests without relying on external systems. This is particularly useful for developers building custom solutions, as proxies allow you to manage retries and bypass restrictions while maintaining full control of the workflow. Proxies also enable geographic targeting, allowing you to send requests from specific regions if required. For high-volume, scalable systems, combining a robust retry mechanism with proxy rotation ensures your requests remain efficient and undetected.
We suggest trying our 55M+ ethically sourced HTTP(S) & SOCKS5 residential proxies and enjoying these benefits:
- Anonymity & security. Hide your true identity with IPs coming from household devices.
- No blocks. Avoid IP bans and blocks with our advanced rotation, <0.5s response time, 99.68% success rate, and 99.99% uptime.
- Advanced targeting. Access local content in 195+ locations, including countries, cities, US states, and US ZIP codes.
- Unlimited connections & threads. Handle high-volume data requests seamlessly with unrestricted connections and parallel processing.
- Intuitive setup. Easily set up and integrate proxies with any popular browser, automation bot, or scraping tool.
- In-depth documentation. Rely on our quick start guide and extensive help material, or contact our 24/7 live tech support.
- Free trial. Claim your free trial to test our residential proxies for 3 days!
Integrating Smartproxy’s residential proxies into Python using requests and the "HTTPAdapter" retry strategy helps manage failed requests and minimize disruptions. Here's how:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Proxy credentials
username = "YOUR_USERNAME"
password = "YOUR_PASSWORD"
proxy_url = f"http://{username}:{password}@gate.smartproxy.com:10001"

# Configure proxy settings
proxies = {
    "http": proxy_url,
    "https": proxy_url
}

# Configure retry strategy
retry_strategy = Retry(
    total=3,  # Max retry attempts
    backoff_factor=1,  # Wait time multiplier (1s, 2s, 4s...)
    status_forcelist=[500, 502, 503, 504],  # Retry only on these HTTP errors
    allowed_methods=["HEAD", "GET", "OPTIONS"]  # Retry only for safe requests
)

# Attach the retry strategy to an HTTPAdapter
adapter = HTTPAdapter(max_retries=retry_strategy)
session = requests.Session()
session.mount("https://", adapter)
session.mount("http://", adapter)

# Target URL
url = "https://ip.smartproxy.com/json"

try:
    response = session.get(url, proxies=proxies, timeout=10)
    response.raise_for_status()  # Raise an error for HTTP errors (4xx, 5xx)
    print(response.text)
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")
Best Python requests practices
To ensure efficient use of Python requests, keep these best practices in mind (a combined sketch follows the list):
- Set timeouts. Always specify a timeout to prevent requests from hanging indefinitely. Use the "timeout" parameter to define a limit for connection and response times.
- Use retry logic. Implement retry mechanisms to handle transient failures, such as network errors or server downtime. Combine retries with delays to avoid overwhelming the server.
- Leverage sessions. Use "requests.Session()" to manage cookies, headers, and connection pooling efficiently. This reduces overhead and improves performance for repeated requests to the same server.
- Handle exceptions. Catch specific exceptions, such as "ConnectionError" and "Timeout," to provide meaningful error handling and prevent your application from crashing.
- Validate responses. Always check response status codes and content to ensure the data you receive is valid. Use "response.raise_for_status()" to catch HTTP errors.
- Manage headers. Set appropriate headers, like "User-Agent," to mimic real browser behavior and improve acceptance rates.
- Optimize payloads. For POST and PUT requests, ensure your payloads are properly formatted (e.g., JSON) and minimize unnecessary data to reduce server load and improve response times.
- Secure connections. Verify SSL certificates for HTTPS requests by default and avoid disabling SSL verification unless absolutely necessary.
- Integrate proxies. Use proxies to avoid IP bans and rate limits. They are especially useful for tasks like web scraping, where repeated requests to the same server can raise red flags.
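Here's a minimal sketch combining several of these practices: a session, connect/read timeouts, response validation, and specific exception handling (the URL and User-Agent string are placeholders):

import requests

session = requests.Session()
session.headers.update({"User-Agent": "my-app/1.0"})  # Illustrative User-Agent

try:
    response = session.get("https://example.com", timeout=(3, 10))  # (connect, read) timeouts
    response.raise_for_status()  # Validate the status code
    print(response.text[:200])
except requests.exceptions.Timeout:
    print("The request timed out")
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")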
To sum up
Failed HTTP requests are inevitable, but with the right tools and strategies, they won’t derail your application. From implementing robust retry logic with "HTTPAdapter" or decorators to integrating proxies for better request management, Python’s requests library provides everything you need to handle errors effectively. By combining these techniques with best practices, we hope you’ll build applications that are both resilient and efficient.
Empower your data collection projects with proxies
Try Smartproxy's industry-leading residential proxies – starting from just $2.20/GB.
About the author
Dominykas Niaura
Technical Copywriter
Dominykas brings a unique blend of philosophical insight and technical expertise to his writing. Starting his career as a film critic and music industry copywriter, he's now an expert in making complex proxy and web scraping concepts accessible to everyone.
Connect with Dominykas via LinkedIn
All information on Smartproxy Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Smartproxy Blog or any third-party websites that may be linked therein.