Utilities
Useful methods for working with http.client, completely decoupled from code specific to urllib3.
At the very core, just like its predecessors, urllib3 is built on top of http.client – the lowest level HTTP library included in the Python standard library.
To aid the limited functionality of the http.client module, urllib3 provides various helper methods which are used with the higher level components but can also be used independently.
class urllib3.util.Retry(total=10, connect=None, read=None, redirect=None, status=None, other=None, allowed_methods=<object object>, status_forcelist=None, backoff_factor=0, raise_on_redirect=True, raise_on_status=True, history=None, respect_retry_after_header=True, remove_headers_on_redirect=<object object>, method_whitelist=<object object>)
Bases: object
Retry configuration.
Each retry attempt will create a new Retry object with updated values, so they can be safely reused.
Retries can be defined as a default for a pool:
retries = Retry(connect=5, read=2, redirect=5)
http = PoolManager(retries=retries)
response = http.request('GET', 'http://example.com/')
Or per-request (which overrides the default for the pool):
response = http.request('GET', 'http://example.com/', retries=Retry(10))
Retries can be disabled by passing False:
response = http.request('GET', 'http://example.com/', retries=False)
Errors will be wrapped in MaxRetryError unless retries are disabled, in which case the causing exception will be raised.
Parameters:
- total (int) – Total number of retries to allow. Takes precedence over other counts. Set to None to remove this constraint and fall back on other counts. Set to 0 to fail on the first retry. Set to False to disable and imply raise_on_redirect=False.
- connect (int) – How many connection-related errors to retry on. These are errors raised before the request is sent to the remote server, which we assume has not triggered the server to process the request. Set to 0 to fail on the first retry of this type.
- read (int) – How many times to retry on read errors. These errors are raised after the request was sent to the server, so the request may have side-effects. Set to 0 to fail on the first retry of this type.
- redirect (int) – How many redirects to perform. Limit this to avoid infinite redirect loops. A redirect is an HTTP response with a status code of 301, 302, 303, 307 or 308. Set to 0 to fail on the first retry of this type. Set to False to disable and imply raise_on_redirect=False.
- status (int) – How many times to retry on bad status codes. These are retries made on responses whose status code matches status_forcelist. Set to 0 to fail on the first retry of this type.
- other (int) – How many times to retry on other errors. Other errors are errors that are not connect, read, redirect or status errors. These errors might be raised after the request was sent to the server, so the request might have side-effects. Set to 0 to fail on the first retry of this type. If total is not set, it's a good idea to set this to 0 to account for unexpected edge cases and avoid infinite retry loops.
- allowed_methods (iterable) – Set of uppercased HTTP method verbs that we should retry on. By default, we only retry on methods which are considered to be idempotent (multiple requests with the same parameters end with the same state). See Retry.DEFAULT_ALLOWED_METHODS. Set to a False value to retry on any verb.
  Warning: Previously this parameter was named method_whitelist; that usage is deprecated in v1.26.0 and will be removed in v2.0.
- status_forcelist (iterable) – A set of integer HTTP status codes that we should force a retry on. A retry is initiated if the request method is in allowed_methods and the response status code is in status_forcelist. By default, this is disabled with None.
- backoff_factor (float) – A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). urllib3 will sleep for {backoff factor} * (2 ** ({number of total retries} - 1)) seconds. If the backoff_factor is 0.1, then sleep() will sleep for [0.0s, 0.2s, 0.4s, …] between retries. It will never be longer than Retry.BACKOFF_MAX. By default, backoff is disabled (set to 0). See the sketch after this list for how it combines with status_forcelist and allowed_methods.
- raise_on_redirect (bool) – Whether, if the number of redirects is exhausted, to raise a MaxRetryError, or to return a response with a response code in the 3xx range.
- raise_on_status (bool) – Similar meaning to raise_on_redirect: whether we should raise an exception, or return a response, if status falls in status_forcelist range and retries have been exhausted.
- history (tuple) – The history of the request encountered during each call to increment(). The list is in the order the requests occurred. Each list item is of class RequestHistory.
- respect_retry_after_header (bool) – Whether to respect the Retry-After header on status codes defined as Retry.RETRY_AFTER_STATUS_CODES or not.
- remove_headers_on_redirect (iterable) – Sequence of headers to remove from the request when a response indicating a redirect is returned before firing off the redirected request.
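As a rough illustration of how these parameters interact (the host and values below are placeholders, not library defaults):
from urllib3 import PoolManager
from urllib3.util import Retry

# Illustrative configuration: at most 5 attempts, retrying only GET/HEAD
# requests on 429/503 responses, sleeping 0.5 * 2 ** (n - 1) seconds
# between attempts (never longer than Retry.BACKOFF_MAX).
retries = Retry(
    total=5,
    backoff_factor=0.5,
    status_forcelist={429, 503},
    allowed_methods={'GET', 'HEAD'},
)
http = PoolManager(retries=retries)
response = http.request('GET', 'http://example.com/')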
BACKOFF_MAX = 120
Maximum backoff time.

DEFAULT_ALLOWED_METHODS = frozenset({'DELETE', 'GET', 'HEAD', 'OPTIONS', 'PUT', 'TRACE'})
Default methods to be used for allowed_methods.

DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset({'Authorization'})
Default headers to be used for remove_headers_on_redirect.

RETRY_AFTER_STATUS_CODES = frozenset({413, 429, 503})
Default status codes to be used for status_forcelist.
classmethod from_int(retries, redirect=True, default=None)
Backwards-compatibility for the old retries format.
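A minimal sketch of the old-style usage (a bare integer is promoted to a Retry object, and an existing Retry instance is expected to pass through unchanged):
from urllib3.util import Retry

retries = Retry.from_int(3)      # roughly equivalent to Retry(total=3)
same = Retry.from_int(retries)   # an existing Retry is returned as-is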
get_retry_after(response)
Get the value of Retry-After in seconds.
increment(method=None, url=None, response=None, error=None, _pool=None, _stacktrace=None)
Return a new Retry object with incremented retry counters.
Parameters:
- response (HTTPResponse) – A response object, or None, if the server did not return a response.
- error (Exception) – An error encountered during the request, or None if the response was received successfully.
Returns:
A new Retry object.
is_exhausted()
Are we out of retries?
is_retry(method, status_code, has_retry_after=False)
Is this method/status code retryable? (Based on allowlists and control variables such as the number of total retries to allow, whether to respect the Retry-After header, whether this header is present, and whether the returned status code is on the list of status codes to be retried upon in the presence of the aforementioned header.)
sleep(response=None)
Sleep between retry attempts.
This method will respect a server's Retry-After response header and sleep the duration of the time requested. If that is not present, it will use an exponential backoff. By default, the backoff factor is 0 and this method will return immediately.
class urllib3.util.SSLContext(protocol=<_SSLMethod.PROTOCOL_TLS: 2>, *args, **kwargs)
Bases: _ssl._SSLContext
An SSLContext holds various SSL-related configuration options and data, such as certificates and possibly a private key.

sslobject_class
alias of SSLObject

sslsocket_class
alias of SSLSocket
class urllib3.util.Timeout(total=None, connect=<object object>, read=<object object>)
Bases: object
Timeout configuration.
Timeouts can be defined as a default for a pool:
timeout = Timeout(connect=2.0, read=7.0)
http = PoolManager(timeout=timeout)
response = http.request('GET', 'http://example.com/')
Or per-request (which overrides the default for the pool):
response = http.request('GET', 'http://example.com/', timeout=Timeout(10))
Timeouts can be disabled by setting all the parameters to None:
no_timeout = Timeout(connect=None, read=None)
response = http.request('GET', 'http://example.com/', timeout=no_timeout)
Parameters:
- total (int, float, or None) – This combines the connect and read timeouts into one; the read timeout will be set to the time leftover from the connect attempt. In the event that both a connect timeout and a total are specified, or a read timeout and a total are specified, the shorter timeout will be applied. Defaults to None. See the sketch after the note below.
- connect (int, float, or None) – The maximum amount of time (in seconds) to wait for a connection attempt to a server to succeed. Omitting the parameter will default the connect timeout to the system default, probably the global default timeout in socket.py. None will set an infinite timeout for connection attempts.
- read (int, float, or None) – The maximum amount of time (in seconds) to wait between consecutive read operations for a response from the server. Omitting the parameter will default the read timeout to the system default, probably the global default timeout in socket.py. None will set an infinite timeout.
Note
Many factors can affect the total amount of time for urllib3 to return an HTTP response.
For example, Python’s DNS resolver does not obey the timeout specified on the socket. Other factors that can affect total request time include high CPU load, high swap, the program running at a low priority level, or other behaviors.
In addition, the read and total timeouts only measure the time between read operations on the socket connecting the client and the server, not the total amount of time for the request to return a complete response. For most requests, the timeout is raised because the server has not sent the first byte in the specified time. This is not always the case; if a server streams one byte every fifteen seconds, a timeout of 20 seconds will not trigger, even though the request will take several minutes to complete.
If your goal is to cut off any request after a set amount of wall clock time, consider having a second “watcher” thread to cut off a slow request.
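A short sketch of how total interacts with connect and read (the host and values are illustrative only):
from urllib3 import PoolManager
from urllib3.util import Timeout

# Allow up to 3 seconds to connect; whatever remains of the 10-second
# total budget after connecting becomes the effective read timeout.
timeout = Timeout(connect=3.0, total=10.0)
http = PoolManager(timeout=timeout)
response = http.request('GET', 'http://example.com/')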
DEFAULT_TIMEOUT = <object object>
A sentinel object representing the default timeout value.

clone()
Create a copy of the timeout object.
Timeout properties are stored per-pool but each request needs a fresh Timeout object to ensure each one has its own start/stop configured.
Returns:
a copy of the timeout object
Return type:
Timeout
property connect_timeout
Get the value to use when setting a connection timeout.
This will be a positive float or integer, the value None (never timeout), or the default system timeout.
Returns:
Connect timeout.
Return type:
int, float, Timeout.DEFAULT_TIMEOUT or None
classmethod from_float(timeout)
Create a new Timeout from a legacy timeout value.
The timeout value used by httplib.py sets the same timeout on the connect() and recv() socket requests. This creates a Timeout object that sets the individual timeouts to the timeout value passed to this function.
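A minimal sketch (per the description above, this should be equivalent to passing the same value for both individual timeouts):
from urllib3.util import Timeout

t = Timeout.from_float(5.0)   # roughly Timeout(connect=5.0, read=5.0)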
get_connect_duration()
Gets the time elapsed since the call to start_connect().
Returns:
Elapsed time in seconds.
Return type:
float
Raises:
urllib3.exceptions.TimeoutStateError – if you attempt to get duration for a timer that hasn't been started.
property read_timeout
Get the value for the read timeout.
This assumes some time has elapsed in the connection timeout and computes the read timeout appropriately.
If self.total is set, the read timeout is dependent on the amount of time taken by the connect timeout. If the connection time has not been established, a TimeoutStateError will be raised.
Returns:
Value to use for the read timeout.
Return type:
int, float, Timeout.DEFAULT_TIMEOUT or None
Raises:
urllib3.exceptions.TimeoutStateError – If start_connect() has not yet been called on this object.
start_connect()
Start the timeout clock, used during a connect() attempt.
Raises:
urllib3.exceptions.TimeoutStateError – if you attempt to start a timer that has been started already.
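A small sketch of the connect-timer bookkeeping (the surrounding connection code is omitted and purely illustrative):
from urllib3.util import Timeout

t = Timeout(connect=2.0, read=7.0)
t.start_connect()
# ... perform the connection attempt here ...
elapsed = t.get_connect_duration()   # seconds since start_connect()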
class urllib3.util.Url(scheme=None, auth=None, host=None, port=None, path=None, query=None, fragment=None)
Bases: urllib3.util.url.Url
Data structure for representing an HTTP URL. Used as a return value for parse_url(). Both the scheme and host are normalized as they are both case-insensitive according to RFC 3986.
property hostname
For backwards-compatibility with urlparse. We're nice like that.

property netloc
Network location including host and port.

property request_uri
Absolute path including the query string.
property url
Convert self into a url.
This function should more or less round-trip with parse_url(). The returned url may not be exactly the same as the url inputted to parse_url(), but it should be equivalent by the RFC (e.g., urls with a blank port will have : removed).
Example:
>>> U = parse_url('http://google.com/mail/')
>>> U.url
'http://google.com/mail/'
>>> Url('http', 'username:password', 'host.com', 80,
...     '/path', 'query', 'fragment').url
'http://username:password@host.com:80/path?query#fragment'
urllib3.util.assert_fingerprint(cert, fingerprint)
Checks if given fingerprint matches the supplied certificate.
Parameters:
- cert – Certificate as bytes object.
- fingerprint – Fingerprint as string of hexdigits, can be interspersed by colons.
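A hedged sketch of typical usage: the certificate bytes come from an established TLS socket, and the fingerprint shown is a placeholder (a SHA-256 fingerprint is 64 hex digits, optionally colon-separated), not a real digest, so the check is expected to fail and raise urllib3.exceptions.SSLError.
import socket
import ssl
from urllib3.util import assert_fingerprint

ctx = ssl.create_default_context()
with socket.create_connection(('example.com', 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname='example.com') as tls:
        cert = tls.getpeercert(binary_form=True)   # DER-encoded bytes
        # Placeholder pin: with the real pinned fingerprint this returns None;
        # a mismatch (as with this all-zero value) raises SSLError.
        assert_fingerprint(cert, '00:' * 31 + '00')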
urllib3.util.current_time()
monotonic() -> float
Monotonic clock, cannot go backward.
urllib3.util.get_host(url)
Deprecated. Use parse_url() instead.
urllib3.util.is_connection_dropped(conn)
Returns True if the connection is dropped and should be closed.
Parameters:
- conn – http.client.HTTPConnection object.
Note: For platforms like AppEngine, this will always return False to let the platform handle connection recycling transparently for us.
urllib3.util.is_fp_closed(obj)
Checks whether a given file-like object is closed.
Parameters:
- obj – The file-like object to check.
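A minimal sketch using an in-memory file object:
from io import BytesIO
from urllib3.util import is_fp_closed

fp = BytesIO(b'data')
is_fp_closed(fp)   # False while the object is open
fp.close()
is_fp_closed(fp)   # True once it has been closed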
urllib3.util.make_headers(keep_alive=None, accept_encoding=None, user_agent=None, basic_auth=None, proxy_basic_auth=None, disable_cache=None)
Shortcuts for generating request headers.
Parameters:
- keep_alive – If True, adds 'connection: keep-alive' header.
- accept_encoding – Can be a boolean, list, or string. True translates to 'gzip,deflate'. List will get joined by comma. String will be used as provided.
- user_agent – String representing the user-agent you want, such as "python-urllib3/0.6".
- basic_auth – Colon-separated username:password string for 'authorization: basic …' auth header.
- proxy_basic_auth – Colon-separated username:password string for 'proxy-authorization: basic …' auth header.
- disable_cache – If True, adds 'cache-control: no-cache' header.
Example:
>>> make_headers(keep_alive=True, user_agent="Batman/1.0")
{'connection': 'keep-alive', 'user-agent': 'Batman/1.0'}
>>> make_headers(accept_encoding=True)
{'accept-encoding': 'gzip,deflate'}
urllib3.util.parse_url(url)
Given a url, return a parsed Url namedtuple. Best-effort is performed to parse incomplete urls. Fields not provided will be None. This parser is RFC 3986 compliant.
The parser logic and helper functions are based heavily on work done in the rfc3986 module.
Partly backwards-compatible with urlparse.
Example:
>>> parse_url('http://google.com/mail/')
Url(scheme='http', host='google.com', port=None, path='/mail/', ...)
>>> parse_url('google.com:80')
Url(scheme=None, host='google.com', port=80, path=None, ...)
>>> parse_url('/foo?bar')
Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...)
urllib3.util.resolve_cert_reqs(candidate)
Resolves the argument to a numeric constant, which can be passed to the wrap_socket function/method from the ssl module. Defaults to ssl.CERT_REQUIRED. If given a string, it is assumed to be the name of the constant in the ssl module or its abbreviation (so you can specify REQUIRED instead of CERT_REQUIRED). If it's neither None nor a string, we assume it is already the numeric constant which can directly be passed to wrap_socket.
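A quick sketch, based only on the behaviour described above (each call is expected to resolve to ssl.CERT_REQUIRED):
import ssl
from urllib3.util import resolve_cert_reqs

resolve_cert_reqs(None)
resolve_cert_reqs('REQUIRED')
resolve_cert_reqs('CERT_REQUIRED')
resolve_cert_reqs(ssl.CERT_REQUIRED)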
urllib3.util.resolve_ssl_version(candidate)
Like resolve_cert_reqs(), but for ssl_version.
urllib3.util.split_first(s, delims)
Deprecated since version 1.25.
Given a string and an iterable of delimiters, split on the first found delimiter. Return two split parts and the matched delimiter.
If not found, then the first part is the full input string.
Example:
>>> split_first('foo/bar?baz', '?/=')
('foo', 'bar?baz', '/')
>>> split_first('foo/bar?baz', '123')
('foo/bar?baz', '', None)
Scales linearly with number of delims. Not ideal for large number of delims.
urllib3.util.ssl_wrap_socket(sock, keyfile=None, certfile=None, cert_reqs=None, ca_certs=None, server_hostname=None, ssl_version=None, ciphers=None, ssl_context=None, ca_cert_dir=None, key_password=None, ca_cert_data=None, tls_in_tls=False)
All arguments except for server_hostname, ssl_context, and ca_cert_dir have the same meaning as they do when using ssl.wrap_socket().
Parameters:
- server_hostname – When SNI is supported, the expected hostname of the certificate.
- ssl_context – A pre-made SSLContext object. If none is provided, one will be created using create_urllib3_context().
- ciphers – A string of ciphers we wish the client to support.
- ca_cert_dir – A directory containing CA certificates in multiple separate files, as supported by OpenSSL's -CApath flag or the capath argument to SSLContext.load_verify_locations().
- key_password – Optional password if the keyfile is encrypted.
- ca_cert_data – Optional string containing CA certificates in PEM format suitable for passing as the cadata parameter to SSLContext.load_verify_locations().
- tls_in_tls – Use SSLTransport to wrap the existing socket.
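A hedged sketch of wrapping a plain TCP socket; the host and CA bundle path are placeholders, not requirements of the function:
import socket
from urllib3.util import ssl_wrap_socket

sock = socket.create_connection(('example.com', 443))
tls_sock = ssl_wrap_socket(
    sock,
    ca_certs='/etc/ssl/certs/ca-certificates.crt',  # placeholder CA bundle path
    server_hostname='example.com',                  # used for SNI
)
tls_sock.close()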
urllib3.util.wait_for_read(sock, timeout=None)
Waits for reading to be available on a given socket. Returns True if the socket is readable, or False if the timeout expired.
urllib3.util.wait_for_write(sock, timeout=None)
Waits for writing to be available on a given socket. Returns True if the socket is writable, or False if the timeout expired.
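A brief sketch using a plain TCP socket (the host and timeout values are illustrative):
import socket
from urllib3.util import wait_for_read, wait_for_write

sock = socket.create_connection(('example.com', 80))
if wait_for_write(sock, timeout=5.0):
    sock.sendall(b'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n')
if wait_for_read(sock, timeout=5.0):
    data = sock.recv(4096)
sock.close()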