User Guide#
Installing#
urllib3 can be installed with pip:
$ python -m pip install urllib3
Making Requests#
First things first, import the urllib3 module:
import urllib3
You’ll need a PoolManager instance to make requests. This object handles all of the details of connection pooling and thread safety so that you don’t have to:
http = urllib3.PoolManager()
To make a request use request():
import urllib3
# Creating a PoolManager instance for sending requests.
http = urllib3.PoolManager()
# Sending a GET request and getting back response as HTTPResponse object.
resp = http.request("GET", "https://httpbin.org/robots.txt")
# Print the returned data.
print(resp.data)
# b"User-agent: *\nDisallow: /deny\n"
request() returns an HTTPResponse object. The Response Content section explains how to handle various responses.
You can use request() to make requests using any HTTP verb:
import urllib3
http = urllib3.PoolManager()
resp = http.request(
    "POST",
    "https://httpbin.org/post",
    fields={"hello": "world"}  # Add custom form fields
)
print(resp.data)
# b"{\n "form": {\n "hello": "world"\n }, ... }
The Request Data section covers sending other kinds of requests data, including JSON, files, and binary data.
Note
For quick scripts and experiments you can also use the top-level urllib3.request() function. It uses a module-global PoolManager instance. Because of that, its side effects could be shared across dependencies relying on it. To avoid side effects, create a new PoolManager instance and use it instead. In addition, the function does not accept the low-level **urlopen_kw keyword arguments. System CA certificates are loaded by default.
Response Content#
The HTTPResponse object provides status, data, and headers attributes:
import urllib3
# Making the request (the request function returns an HTTPResponse object)
resp = urllib3.request("GET", "https://httpbin.org/ip")
print(resp.status)
# 200
print(resp.data)
# b"{\n "origin": "104.232.115.37"\n}\n"
print(resp.headers)
# HTTPHeaderDict({"Content-Length": "32", ...})
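The headers attribute is an HTTPHeaderDict, so header names can be looked up case-insensitively. A small sketch (the exact headers returned depend on the server):
import urllib3
resp = urllib3.request("GET", "https://httpbin.org/ip")
# HTTPHeaderDict lookups are case-insensitive.
print(resp.headers["Content-Type"])
# application/json
print(resp.headers.get("content-type"))
# application/json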
JSON Content#
JSON content can be loaded with the json() method of the response:
import urllib3
resp = urllib3.request("GET", "https://httpbin.org/ip")
print(resp.json())
# {"origin": "127.0.0.1"}
Alternatively, custom JSON libraries such as orjson can be used to encode the request data and to decode and deserialize the data attribute of the response:
import orjson
import urllib3
encoded_data = orjson.dumps({"attribute": "value"})
resp = urllib3.request(method="POST", url="http://httpbin.org/post", body=encoded_data)
print(orjson.loads(resp.data)["json"])
# {'attribute': 'value'}
Binary Content#
The data attribute of the response is always set to a byte string representing the response content:
import urllib3
resp = urllib3.request("GET", "https://httpbin.org/bytes/8")
print(resp.data)
# b"\xaa\xa5H?\x95\xe9\x9b\x11"
Note
For larger responses, it’s sometimes better to stream the response.
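For example, a minimal streaming sketch: pass preload_content=False so the body is not read into memory up front, iterate over fixed-size chunks with stream(), and release the connection back to the pool when finished.
import urllib3
resp = urllib3.request(
    "GET",
    "https://httpbin.org/bytes/1024",
    preload_content=False
)
# Read the body in 128-byte chunks instead of all at once.
for chunk in resp.stream(128):
    print(len(chunk))
# Return the connection to the pool for reuse.
resp.release_conn()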
Using io Wrappers with Response Content#
Sometimes you want to use io.TextIOWrapper or similar objects like a CSV reader directly with HTTPResponse data. Making these two interfaces play nice together requires setting the auto_close attribute to False. By default HTTP responses are closed after reading all bytes; this disables that behavior:
import io
import urllib3
resp = urllib3.request("GET", "https://example.com", preload_content=False)
resp.auto_close = False
for line in io.TextIOWrapper(resp):
    print(line)
# <!doctype html>
# <html>
# <head>
# ....
# </body>
# </html>
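The same pattern works with a CSV reader from the standard library csv module. This is only a sketch; the URL below is a placeholder for an endpoint that actually returns CSV text:
import csv
import io
import urllib3
# Placeholder URL: substitute an endpoint that returns CSV data.
resp = urllib3.request("GET", "https://example.com/data.csv", preload_content=False)
resp.auto_close = False
for row in csv.reader(io.TextIOWrapper(resp)):
    print(row)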
Request Data#
Headers#
You can specify headers as a dictionary in the headers argument in request():
import urllib3
resp = urllib3.request(
    "GET",
    "https://httpbin.org/headers",
    headers={
        "X-Something": "value"
    }
)
print(resp.json()["headers"])
# {"X-Something": "value", ...}
Or you can use the HTTPHeaderDict class to create multi-valued HTTP headers:
import urllib3
# Create an HTTPHeaderDict and add headers
headers = urllib3.HTTPHeaderDict()
headers.add("Accept", "application/json")
headers.add("Accept", "text/plain")
# Make the request using the headers
resp = urllib3.request(
    "GET",
    "https://httpbin.org/headers",
    headers=headers
)
print(resp.json()["headers"])
# {"Accept": "application/json, text/plain", ...}
Query Parameters#
For GET, HEAD, and DELETE requests, you can simply pass the arguments as a dictionary in the fields argument to request():
import urllib3
resp = urllib3.request(
    "GET",
    "https://httpbin.org/get",
    fields={"arg": "value"}
)
print(resp.json()["args"])
# {"arg": "value"}
For POST and PUT requests, you need to manually encode query parameters in the URL:
from urllib.parse import urlencode
import urllib3
# Encode the args into url grammar.
encoded_args = urlencode({"arg": "value"})
# Create a URL with args encoded.
url = "https://httpbin.org/post?" + encoded_args
resp = urllib3.request("POST", url)
print(resp.json()["args"])
# {"arg": "value"}
Form Data#
For PUT and POST requests, urllib3 will automatically form-encode the dictionary in the fields argument provided to request():
import urllib3
resp = urllib3.request(
    "POST",
    "https://httpbin.org/post",
    fields={"field": "value"}
)
print(resp.json()["form"])
# {"field": "value"}
JSON#
You can send a JSON request by specifying the data in the json argument. urllib3 automatically encodes the data using the json module with UTF-8 encoding. Also, by default the "Content-Type" header is set to "application/json" if not specified when calling request():
import urllib3
data = {"attribute": "value"}
resp = urllib3.request(
    "POST",
    "https://httpbin.org/post",
    json=data
)
print(resp.json()["json"])
# {"attribute": "value"}
Files & Binary Data#
For uploading files using multipart/form-data encoding you can use the same approach as Form Data and specify the file field as a tuple of (file_name, file_data):
import urllib3
# Reading the text file from local storage.
with open("example.txt") as fp:
file_data = fp.read()
# Sending the request.
resp = urllib3.request(
"POST",
"https://httpbin.org/post",
fields={
"filefield": ("example.txt", file_data),
}
)
print(resp.json()["files"])
# {"filefield": "..."}
While specifying the filename is not strictly required, it’s recommended in order to match browser behavior. You can also pass a third item in the tuple to specify the file’s MIME type explicitly:
resp = urllib3.request(
    "POST",
    "https://httpbin.org/post",
    fields={
        "filefield": ("example.txt", file_data, "text/plain"),
    }
)
For sending raw binary data simply specify the body argument. It’s also recommended to set the Content-Type header:
import urllib3
with open("/home/samad/example.jpg", "rb") as fp:
binary_data = fp.read()
resp = urllib3.request(
"POST",
"https://httpbin.org/post",
body=binary_data,
headers={"Content-Type": "image/jpeg"}
)
print(resp.json()["data"])
# data:application/octet-stream;base64,...
Certificate Verification#
Note
New in version 1.25: HTTPS connections are now verified by default (cert_reqs = "CERT_REQUIRED").
While you can disable certificate verification by setting cert_reqs = "CERT_NONE", it is highly recommended to leave it on.
Unless otherwise specified, urllib3 will try to load the default system certificate stores. The most reliable cross-platform method is to use the certifi package, which provides Mozilla’s root certificate bundle:
$ python -m pip install certifi
Once you have certificates, you can create a PoolManager that verifies certificates when making requests:
import certifi
import urllib3
http = urllib3.PoolManager(
    cert_reqs="CERT_REQUIRED",
    ca_certs=certifi.where()
)
The PoolManager will automatically handle certificate verification and will raise SSLError if verification fails:
import certifi
import urllib3
http = urllib3.PoolManager(
    cert_reqs="CERT_REQUIRED",
    ca_certs=certifi.where()
)
http.request("GET", "https://httpbin.org/")
# (No exception)
http.request("GET", "https://expired.badssl.com")
# urllib3.exceptions.SSLError ...
Note
You can use OS-provided certificates if desired. Just specify the full path to the certificate bundle as the ca_certs argument instead of certifi.where(). For example, most Linux systems store the certificates at /etc/ssl/certs/ca-certificates.crt. Other operating systems can be difficult.
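For example, on many Debian and Ubuntu systems the following sketch points ca_certs at the distribution’s bundle; adjust the path for your operating system:
import urllib3
# Path used by many Linux distributions; adjust for your system.
http = urllib3.PoolManager(
    cert_reqs="CERT_REQUIRED",
    ca_certs="/etc/ssl/certs/ca-certificates.crt"
)
resp = http.request("GET", "https://httpbin.org/ip")
print(resp.status)
# 200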
Using Timeouts#
Timeouts allow you to control how long (in seconds) requests are allowed to run before being aborted. In simple cases, you can specify a timeout as a float to request():
import urllib3
resp = urllib3.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=4.0
)
print(type(resp))
# <class "urllib3.response.HTTPResponse">

# This request will take more time to process than timeout.
urllib3.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=2.5
)
# MaxRetryError caused by ReadTimeoutError
For more granular control you can use a Timeout instance which lets you specify separate connect and read timeouts:
import urllib3
resp = urllib3.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=urllib3.Timeout(connect=1.0)
)
print(type(resp))
# <urllib3.response.HTTPResponse>

urllib3.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=urllib3.Timeout(connect=1.0, read=2.0)
)
# MaxRetryError caused by ReadTimeoutError
If you want all requests to be subject to the same timeout, you can specify the timeout at the PoolManager level:
import urllib3
http = urllib3.PoolManager(timeout=3.0)
http = urllib3.PoolManager(
    timeout=urllib3.Timeout(connect=1.0, read=2.0)
)
You can still override this pool-level timeout by specifying timeout to request().
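For example, a per-request timeout takes precedence over the pool-level default; a sketch using the delay endpoint from above:
import urllib3
# Pool-wide default of 3 seconds.
http = urllib3.PoolManager(timeout=3.0)
# This one request is allowed up to 10 seconds instead.
resp = http.request(
    "GET",
    "https://httpbin.org/delay/3",
    timeout=10.0
)
print(resp.status)
# 200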
Retrying Requests#
urllib3 can automatically retry idempotent requests. This same mechanism also handles redirects. You can control the retries using the retries parameter to request(). By default, urllib3 will retry requests 3 times and follow up to 3 redirects.
To change the number of retries just specify an integer:
import urllib3
urllib3.request("GET", "https://httpbin.org/ip", retries=10)
To disable all retry and redirect logic specify retries=False:
import urllib3
urllib3.request(
    "GET",
    "https://nxdomain.example.com",
    retries=False
)
# NewConnectionError

resp = urllib3.request(
    "GET",
    "https://httpbin.org/redirect/1",
    retries=False
)
print(resp.status)
# 302
To disable redirects but keep the retrying logic, specify redirect=False:
resp = urllib3.request(
    "GET",
    "https://httpbin.org/redirect/1",
    redirect=False
)
print(resp.status)
# 302
For more granular control you can use a Retry instance. This class allows you far greater control of how requests are retried. For example, to do a total of 3 retries, but limit to only 2 redirects:
urllib3.request(
    "GET",
    "https://httpbin.org/redirect/3",
    retries=urllib3.Retry(3, redirect=2)
)
# MaxRetryError
You can also disable exceptions for too many redirects and just return the 302 response:
resp = urllib3.request(
    "GET",
    "https://httpbin.org/redirect/3",
    retries=urllib3.Retry(
        redirect=2,
        raise_on_redirect=False
    )
)
print(resp.status)
# 302
If you want all requests to be subject to the same retry policy, you can specify the retry at the PoolManager level:
import urllib3
http = urllib3.PoolManager(retries=False)
http = urllib3.PoolManager(
    retries=urllib3.Retry(5, redirect=2)
)
You can still override this pool-level retry policy by specifying retries to request().
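For example, a per-request retries value takes precedence over the pool-level policy; a sketch reusing the redirect endpoint from above:
import urllib3
# Pool created with retries and redirects disabled.
http = urllib3.PoolManager(retries=False)
# This one request still follows redirects using its own Retry instance.
resp = http.request(
    "GET",
    "https://httpbin.org/redirect/1",
    retries=urllib3.Retry(3, redirect=2)
)
print(resp.status)
# 200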
Errors & Exceptions#
urllib3 wraps lower-level exceptions, for example:
import urllib3
try:
    urllib3.request("GET", "https://nx.example.com", retries=False)
except urllib3.exceptions.NewConnectionError:
    print("Connection failed.")
# Connection failed.
See exceptions for the full list of all exceptions.
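When retries are enabled (the default), lower-level errors such as a failed connection are wrapped in a MaxRetryError once the retries are exhausted, so that is often the exception to catch. A minimal sketch:
import urllib3
try:
    urllib3.request("GET", "https://nx.example.com")
except urllib3.exceptions.MaxRetryError as e:
    # e.reason holds the underlying exception, e.g. NewConnectionError.
    print("Request failed after retries:", e.reason)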
Logging#
If you are using the standard library logging module, urllib3 will emit several logs. In some cases this can be undesirable. You can use the standard logger interface to change the log level for urllib3’s logger:
import logging

logging.getLogger("urllib3").setLevel(logging.WARNING)
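Conversely, when debugging connection problems you can raise the verbosity for urllib3 only; a sketch, assuming no other logging configuration is in place:
import logging
import urllib3
# Send log records to the console and show urllib3's debug messages
# (e.g. connection pool activity) without changing other loggers.
logging.basicConfig(level=logging.INFO)
logging.getLogger("urllib3").setLevel(logging.DEBUG)
urllib3.request("GET", "https://httpbin.org/ip")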