How to Handle Timeouts in Python Requests
Learn how to handle timeouts in Python requests properly, including connect vs read timeouts, retries, streaming edge cases, and production best practices.
Network calls fail.
Not because the code is wrong, but because the real world is messy. DNS stalls. TLS handshakes hang. A server accepts a connection and never sends a byte back.
If a Python service calls an external API without proper timeouts, it can freeze a worker thread and destroy throughput.
This is not theoretical. A single missing timeout can exhaust a connection pool under load and turn a minor upstream slowdown into a production incident.
Here’s how to handle timeouts in requests properly — with trade-offs, edge cases, and production realities in mind.
The Default Is Dangerous
The requests library does not set a timeout by default.
import requests
response = requests.get("https://api.example.com/data")
That call can hang forever.
Why This Matters
- A slow upstream can block a worker thread indefinitely
- In WSGI apps, blocked threads reduce concurrency
- In async task workers, blocked calls delay queue processing
- Under load, connection pools fill up and new requests stall

This is how minor latency spikes cascade into outages. Always set a timeout. No exceptions.
Basic Timeout Usage
The simplest fix:
response = requests.get(
    "https://api.example.com/data",
    timeout=5
)
That 5 means:
Wait up to 5 seconds to connect, and up to 5 seconds for the server to send data. It is not a cap on the total response time. If either threshold is exceeded, requests raises:
requests.exceptions.Timeout
Better than hanging — but still too blunt for many production systems.
Split Connect and Read Timeouts
The timeout argument accepts a tuple:
response = requests.get(
    "https://api.example.com/data",
    timeout=(3, 10)
)
This means:
- 3 seconds to establish the TCP connection
- 10 seconds to receive data after connection
Why Split Them?
Connection timeouts usually indicate:
- DNS issues
- Network routing problems
- Host unreachable
- Firewall blocks
Read timeouts usually indicate:
- Server overload
- Slow backend processing
- Streaming endpoint stalling
Different failure modes carry different operational meaning. Set them intentionally.
Catching Timeout Exceptions Properly
Don’t catch broad exceptions unless you want to hide real issues.
Correct approach:
import requests

try:
    response = requests.get(
        "https://api.example.com/data",
        timeout=(3, 10)
    )
    response.raise_for_status()
except requests.exceptions.ConnectTimeout:
    handle_connect_timeout()
except requests.exceptions.ReadTimeout:
    handle_read_timeout()
except requests.exceptions.Timeout:
    handle_generic_timeout()
Why This Matters
- Connect timeout → maybe fallback to secondary region
- Read timeout → maybe retry
- DNS failure → likely configuration issue

Treating them the same loses signal.
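The catch order in the snippet above works because of how the exception classes nest: ConnectTimeout inherits from both ConnectionError and Timeout, while ReadTimeout inherits from Timeout only. A quick check, no network needed:

```python
import requests

exc = requests.exceptions

# ConnectTimeout is both a connection error and a timeout,
# so it must be caught before the broader Timeout handler.
print(issubclass(exc.ConnectTimeout, exc.Timeout))          # True
print(issubclass(exc.ConnectTimeout, exc.ConnectionError))  # True

# ReadTimeout is a timeout but not a connection error.
print(issubclass(exc.ReadTimeout, exc.Timeout))             # True
print(issubclass(exc.ReadTimeout, exc.ConnectionError))     # False
```

This is also why a bare `except requests.exceptions.Timeout` placed first would swallow both specific cases.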
Retries: Use With Restraint
Timeouts often deserve retries. Blind retries amplify load.
❌ Bad Pattern
response = requests.get(
    "https://api.example.com/data",
    timeout=(3, 10)
)
Repeated blindly = retry storm.
Better Approach: urllib3 Retry via HTTPAdapter
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
import requests
session = requests.Session()

retry = Retry(
    total=3,
    backoff_factor=0.5,
    status_forcelist=[500, 502, 503, 504],
    allowed_methods=["GET", "HEAD"]
)

adapter = HTTPAdapter(max_retries=retry)
session.mount("https://", adapter)

response = session.get(
    "https://api.example.com/data",
    timeout=(3, 10)
)
What This Gives You
- Exponential backoff
- Retry only safe HTTP methods
- Avoid retrying on every exception
Still not magic. If the upstream is melting, retries can worsen the situation. Cap them.
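An alternative to adapter-level retries is an explicit loop, which keeps the retry policy visible at the call site and makes the cap obvious. A sketch with illustrative names; the `_get` parameter exists only so the policy can be tested without a network:

```python
import random
import time

import requests

def get_with_retries(url, attempts=3, timeout=(3, 10),
                     base_delay=0.5, _get=requests.get):
    """Retry only on timeouts, with capped exponential backoff and jitter.

    Illustrative sketch, not part of requests.
    """
    for attempt in range(attempts):
        try:
            return _get(url, timeout=timeout)
        except requests.exceptions.Timeout:
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller decide
            # base, 2*base, 4*base... capped at 5s, plus jitter so
            # synchronized clients don't all retry in lockstep
            delay = min(base_delay * (2 ** attempt), 5.0)
            time.sleep(delay + random.uniform(0, delay / 2))
```

The jitter is the important part: without it, many clients that timed out together will retry together, re-creating the spike that caused the timeout.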
Timeouts in High-Concurrency Systems
In low-traffic scripts, a 30-second timeout might be fine. In production APIs? Reckless.
Example Scenario
- Gunicorn with 4 workers
- Each worker handles 20 concurrent threads
- One upstream stalls for 60 seconds
Now 80 threads are blocked. Requests pile up. Latency spikes. Eventually 502s everywhere.

Shorter timeouts improve resilience by:
- Forcing early failure
- Freeing resources quickly
- Preventing thread starvation
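The arithmetic behind the scenario is worth making explicit (numbers are the illustrative ones from the example above):

```python
# Illustrative capacity math for a threaded WSGI deployment.
workers = 4
threads_per_worker = 20
total_threads = workers * threads_per_worker  # 80 concurrent slots

upstream_stall_s = 60   # how long the upstream hangs
read_timeout_s = 10     # our read timeout

# Worst case, every slot is stuck on the stalled upstream.
# Without a timeout each slot is blocked for the full stall;
# with one, it frees up after at most the read timeout.
blocked_for = min(upstream_stall_s, read_timeout_s)
print(total_threads, blocked_for)  # 80 10
```

A 10-second read timeout turns a minute of total unavailability into a 10-second brownout per affected request.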
Typical Production Defaults
- Connect timeout: 1–3 seconds
- Read timeout: 3–10 seconds
Adjust based on SLA and payload size.
Streaming Responses: Hidden Trap
When using stream=True, read timeouts behave differently.
response = requests.get(
    url,
    timeout=(3, 10),
    stream=True
)

for chunk in response.iter_content(chunk_size=8192):
    process(chunk)
The read timeout applies per socket read, not to total download time. If the server sends 1 byte every 9 seconds, this may never time out.
To enforce a total duration:
import time

start = time.time()
for chunk in response.iter_content(chunk_size=8192):
    if time.time() - start > 30:
        raise TimeoutError("Download exceeded 30 seconds")
    process(chunk)
If total duration matters, enforce it explicitly.
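One way to package that deadline check is a small wrapper around the chunk iterator. The helper name `with_deadline` is illustrative, not part of requests; it uses `time.monotonic`, which is immune to wall-clock adjustments:

```python
import time

def with_deadline(chunks, seconds):
    """Yield from `chunks` until a total deadline passes, then raise.

    Illustrative helper: works with any iterator, e.g.
    response.iter_content(chunk_size=8192).
    """
    deadline = time.monotonic() + seconds
    for chunk in chunks:
        if time.monotonic() > deadline:
            raise TimeoutError(f"Download exceeded {seconds} seconds")
        yield chunk
```

Usage then becomes `for chunk in with_deadline(response.iter_content(chunk_size=8192), 30): process(chunk)`. Note the deadline is only checked between chunks, so a single stalled read can still block for up to the socket read timeout.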
Session-Level Defaults
Sprinkling timeout= everywhere invites mistakes. Someone will forget it during a refactor. Encapsulate it.
class TimeoutSession(requests.Session):
    def request(self, *args, **kwargs):
        kwargs.setdefault("timeout", (3, 10))
        return super().request(*args, **kwargs)

session = TimeoutSession()
response = session.get("https://api.example.com/data")
Now every request has a sane default unless explicitly overridden.
This pattern prevents subtle regressions and keeps timeout policy centralized. It also reinforces a broader principle in Python systems design: make safe defaults automatic. Centralizing defaults reduces human error. Production systems fail at the margins, not in the happy path.
Testing Timeout Behavior
Most systems test happy paths. Few test latency. You should.
Tools to Simulate Slow Endpoints
- responses
- httpretty
- A local proxy that delays responses
- Integration tests with artificial sleep
If timeout handling isn’t tested, it will fail at the worst possible moment.
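A self-contained sketch of the last idea: spin up a throwaway local server that stalls longer than the client's read timeout, and assert that the timeout actually fires. The server and timings here are illustrative:

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(1.0)          # stall longer than the client's read timeout
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass                     # keep test output quiet

class QuietServer(HTTPServer):
    def handle_error(self, request, client_address):
        pass                     # client hangs up mid-request; that's expected

server = QuietServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
try:
    requests.get(url, timeout=(1, 0.3))   # read timeout far below the stall
    timed_out = False
except requests.exceptions.ReadTimeout:
    timed_out = True
finally:
    server.shutdown()

print(timed_out)  # True
```

The same pattern works inside a pytest fixture: start the slow server once, point the client at it, and assert on the exception type rather than on wall-clock time.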
Conclusion
Design your timeout strategy deliberately:
- Always set a timeout
- Split connect and read timeouts
- Be explicit about retries
- Keep timeouts short in high-concurrency systems
- Test failure paths
- Combine with circuit breakers for real resilience
Timeouts are not just about avoiding hangs. They define how your system behaves under stress. When upstream services slow down, your timeout policy determines whether your app degrades gracefully — or collapses.