Dataset Viewer
| Column | Type | Range / values |
|---|---|---|
| repo_name | string | 1 class (1 value) |
| topic | string | 1 class (1 value) |
| issue_number | int64 | 30 to 6.44k |
| title | string | length 8 to 145 |
| body | string | length 0 to 7.01k |
| state | string | 1 class (1 value) |
| created_at | date string | 2011-05-13 13:31:24 to 2023-05-04 22:27:39 |
| updated_at | date string | 2014-09-10 18:37:56 to 2024-05-21 10:16:30 |
| url | string | length 41 to 43 |
| labels | string | length 21 to 3.05k |
| user_login | string | length 4 to 16 |
| comments_count | int64 | 25 to 211 |
| solution_body | string | length 21 to 3.05k |
| solution_author | string | length 4 to 16 |
| solution_created_at | date string | 2011-05-13 13:33:05 to 2023-05-04 22:32:18 |
| solution_url | string | length 62 to 67 |
| solution_reactions | int64 | 0 to 34 |

repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count | solution_body | solution_author | solution_created_at | solution_url | solution_reactions
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
psf/requests | http | 2,966 | Consider using system trust stores by default in 3.0.0. | It's been raised repeatedly, mostly by people using Linux systems, that it's annoying that requests doesn't use the system trust store and instead uses the one that certifi ships. This is an understandable position. I have some personal attachment to the certifi approach, but the other side of that argument definitely has a reasonable position too. For this reason, I'd like us to look into whether we should use the system trust store by default, and make certifi's bundle a fallback option.
I have some caveats here:
1. If we move to the system trust store, we must do so on _all_ platforms: Linux must not be its own special snowflake.
2. We must have broad-based support for Linux and Windows.
3. We must be able to fall back to certifi cleanly.
Right now it seems like the best route to achieving this would be to use [certitude](https://github.com/python-hyper/certitude). This currently has support for dynamically generating the cert bundle OpenSSL needs directly from the system keychain on OS X. If we added Linux and Windows support to that library, we may have the opportunity to switch to using certitude.
Given @kennethreitz's bundling policy, we probably cannot unconditionally switch to certitude, because certitude depends on cryptography (at least on OS X). However, certitude could take the current privileged position that certifi takes, or be a higher priority than certifi, as an optional dependency that is used if present on the system.
Thoughts? This is currently an RFC, so please comment if you have opinions. /cc @sigmavirus24 @alex @kennethreitz @dstufft @glyph @reaperhulk @morganfainberg
| closed | 2016-01-11T17:15:47Z | 2024-05-19T18:35:48Z | https://github.com/psf/requests/issues/2966 | I think the system trust stores (or not) essentially boils down to whether you want requests to act the same across platforms, or whether you want it to act in line with the platform it is running on. I do not think that _either_ of these options are wrong (or right), just different trade offs.
I think that it's not as simple on Windows as it is on Linux or OSX (although @tiran might have a better idea). I _think_ that Windows doesn't ship with all of the certificates available and you have to do something (use WinHTTP?) to get it to download any additional certificates on demand. I think that means that a brand new Windows install, if you attempt to dump the certificate store will be missing a great many certificates.
On Linux, you still have the problem that there isn't one single set location for the certificate files, the best you can do is try to heuristically guess at where it might be. This gets better on Python 2.7.9+ and Python 3.4+ since you can use `ssl.get_default_verify_paths()` to get what the default paths are, but you can't rely on that unless you drop 2.6, 2.7.8, and 3.3. In pip we attempt to discover the location of the system trust store (just by looping over some common file locations) and if we can't find it we fall back to certifi, and one problem that has come up is that sometimes we'll find a location, but it's an old outdated copy that isn't being updated by anything. People then get really confused because it works in their browser, it works with requests, but it doesn't in pip.
I assume the fall back to certifi ensures that things will still work correctly on platforms that either don't ship certificates at all, or don't ship them by default and they aren't installed? If so, that's another possible niggle here that you'd want to think about. Some platforms, like say FreeBSD, don't ship them by default at all. So it's possible that people will have a requests-using thing running just fine without the FreeBSD certificates installed, and they then install them (explicitly or implicitly) and suddenly they trust something different and the behavior of the program changes.
Anyways, the desire seems reasonable to me and, if all of the little niggles get worked out, it really just comes down to a question of if requests wants to fall on the side of "fitting in" with a particular platform, or if it wants to prefer cross platform uniformity.
| Lukasa | 211 | I think the system trust stores (or not) essentially boils down to whether you want requests to act the same across platforms, or whether you want it to act in line with the platform it is running on. I do not think that _either_ of these options are wrong (or right), just different trade offs.
I think that it's not as simple on Windows as it is on Linux or OSX (although @tiran might have a better idea). I _think_ that Windows doesn't ship with all of the certificates available and you have to do something (use WinHTTP?) to get it to download any additional certificates on demand. I think that means that a brand new Windows install, if you attempt to dump the certificate store will be missing a great many certificates.
On Linux, you still have the problem that there isn't one single set location for the certificate files, the best you can do is try to heuristically guess at where it might be. This gets better on Python 2.7.9+ and Python 3.4+ since you can use `ssl.get_default_verify_paths()` to get what the default paths are, but you can't rely on that unless you drop 2.6, 2.7.8, and 3.3. In pip we attempt to discover the location of the system trust store (just by looping over some common file locations) and if we can't find it we fall back to certifi, and one problem that has come up is that sometimes we'll find a location, but it's an old outdated copy that isn't being updated by anything. People then get really confused because it works in their browser, it works with requests, but it doesn't in pip.
I assume the fall back to certifi ensures that things will still work correctly on platforms that either don't ship certificates at all, or don't ship them by default and they aren't installed? If so, that's another possible niggle here that you'd want to think about. Some platforms, like say FreeBSD, don't ship them by default at all. So it's possible that people will have a requests-using thing running just fine without the FreeBSD certificates installed, and they then install them (explicitly or implicitly) and suddenly they trust something different and the behavior of the program changes.
Anyways, the desire seems reasonable to me and, if all of the little niggles get worked out, it really just comes down to a question of if requests wants to fall on the side of "fitting in" with a particular platform, or if it wants to prefer cross platform uniformity.
| dstufft | 2016-01-11T17:37:49Z | https://github.com/psf/requests/issues/2966#issuecomment-170628425 | 0 |
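A minimal sketch of the fallback strategy dstufft describes (and that pip used), assuming Python 2.7.9+/3.4+ for `ssl.get_default_verify_paths()` and that certifi is installed; `find_ca_bundle` is a hypothetical helper name:

```python
import os
import ssl

import certifi


def find_ca_bundle():
    # Ask the linked OpenSSL where it expects CA material to live.
    paths = ssl.get_default_verify_paths()
    for candidate in (paths.cafile, paths.capath):
        if candidate and os.path.exists(candidate):
            return candidate
    # Nothing usable on disk: fall back to certifi's bundled certificates.
    return certifi.where()


print(find_ca_bundle())
```

Note that this inherits the pitfall described above: an existing but stale bundle on disk is picked up just as happily as a maintained one.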
psf/requests | http | 1,573 | Specify password for SSL client side certificate | As far as I know currently it's not possible to specify the password for the client side certificate you're using for authentication.
This is a bit of a problem because you typically want to password-protect the .pem file that contains your private key. `openssl` won't even let you create one without a password.
| closed | 2013-09-03T13:14:44Z | 2021-11-28T05:40:38Z | https://github.com/psf/requests/issues/1573 | Something like:
`requests.get('https://kennethreitz.com', cert='server.pem', cert_pw='my_password')`
| botondus | 122 | Something like:
`requests.get('https://kennethreitz.com', cert='server.pem', cert_pw='my_password')`
| botondus | 2013-09-03T13:16:56Z | https://github.com/psf/requests/issues/1573#issuecomment-23711419 | 0 |
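The `cert_pw` keyword sketched above was never added to the API. One workaround on recent versions of requests/urllib3 is a transport adapter whose `SSLContext` decrypts the key via `load_cert_chain(..., password=...)`; the adapter class name and file names below are illustrative:

```python
import ssl

import requests
from requests.adapters import HTTPAdapter


class EncryptedCertAdapter(HTTPAdapter):
    """Present a password-protected client certificate (sketch)."""

    def __init__(self, certfile, password, **kwargs):
        self._certfile = certfile
        self._password = password
        super().__init__(**kwargs)

    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        # load_cert_chain() takes the password for the encrypted key.
        ctx.load_cert_chain(self._certfile, password=self._password)
        kwargs["ssl_context"] = ctx
        super().init_poolmanager(*args, **kwargs)


session = requests.Session()
session.mount("https://", EncryptedCertAdapter("server.pem", "my_password"))
```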
psf/requests | http | 2,455 | CERTIFICATE_VERIFY_FAILED with 2.5.2, works with 2.5.1 | I'm having a somewhat odd issue. A get() request seems to work fine with requests-2.5.1, but after upgrading to requests 2.5.2, the same URL leads to `CERTIFICATE_VERIFY_FAILED`. cc @mpharrigan
```
$ pip install requests==2.5.1
[ ... snip, installs just fine ...]
$ python -c 'import requests; print(requests.get("http://conda.binstar.org/omnia/linux-64/fftw3f-3.3.3-1.tar.bz2"))'
<Response [200]>
$ pip install requests==2.5.2
[ ... snip, installs just fine ...]
$ python -c 'import requests; print(requests.get("http://conda.binstar.org/omnia/linux-64/fftw3f-3.3.3-1.tar.bz2"))'
Traceback (most recent call last):
File "/Users/rmcgibbo/miniconda/envs/3.4.2/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "/Users/rmcgibbo/miniconda/envs/3.4.2/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 341, in _make_request
self._validate_conn(conn)
File "/Users/rmcgibbo/miniconda/envs/3.4.2/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 762, in _validate_conn
conn.connect()
File "/Users/rmcgibbo/miniconda/envs/3.4.2/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 238, in connect
ssl_version=resolved_ssl_version)
File "/Users/rmcgibbo/miniconda/envs/3.4.2/lib/python3.4/site-packages/requests/packages/urllib3/util/ssl_.py", line 256, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/Users/rmcgibbo/miniconda/envs/3.4.2/lib/python3.4/ssl.py", line 364, in wrap_socket
_context=self)
File "/Users/rmcgibbo/miniconda/envs/3.4.2/lib/python3.4/ssl.py", line 578, in __init__
self.do_handshake()
File "/Users/rmcgibbo/miniconda/envs/3.4.2/lib/python3.4/ssl.py", line 805, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:600)
During handling of the above exception, another exception occurred:
[... snip ...]
```
| closed | 2015-02-24T01:11:01Z | 2021-09-08T20:00:52Z | https://github.com/psf/requests/issues/2455 | FWIW, this is with python 3.4.2 and OS X 10.10.
| rmcgibbo | 111 | FWIW, this is with python 3.4.2 and OS X 10.10.
| rmcgibbo | 2015-02-24T01:11:19Z | https://github.com/psf/requests/issues/2455#issuecomment-75679526 | 0 |
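One quick way to narrow a regression like this down is to retry the same request against a different CA bundle; if it then succeeds, the shipped bundle (not the server) is what changed between releases. The bundle path below is only an example and varies by platform:

```python
import requests

url = "http://conda.binstar.org/omnia/linux-64/fftw3f-3.3.3-1.tar.bz2"
# Verify against the operating system's CA store instead of the bundle
# that ships with requests; adjust the path for your platform.
print(requests.get(url, verify="/etc/ssl/cert.pem"))
```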
psf/requests | http | 4,006 | requests 2.14.0 cannot be installed using pip < 8.1.2 | Example with pip 6.1.1 (but same with pip 8.1.1):
```console
> pip install requests
You are using pip version 6.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting requests
Using cached requests-2.14.0-py2.py3-none-any.whl
Exception:
Traceback (most recent call last):
File "D:\VEnvs\testpip\lib\site-packages\pip\basecommand.py", line 246, in main
status = self.run(options, args)
File "D:\VEnvs\testpip\lib\site-packages\pip\commands\install.py", line 342, in run
requirement_set.prepare_files(finder)
File "D:\VEnvs\testpip\lib\site-packages\pip\req\req_set.py", line 345, in prepare_files
functools.partial(self._prepare_file, finder))
File "D:\VEnvs\testpip\lib\site-packages\pip\req\req_set.py", line 290, in _walk_req_to_install
more_reqs = handler(req_to_install)
File "D:\VEnvs\testpip\lib\site-packages\pip\req\req_set.py", line 557, in _prepare_file
set(req_to_install.extras) - set(dist.extras)
File "D:\VEnvs\testpip\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2758, in extras
return [dep for dep in self._dep_map if dep]
File "D:\VEnvs\testpip\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2781, in _dep_map
self.__dep_map = self._compute_dependencies()
File "D:\VEnvs\testpip\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2814, in _compute_dependencies
common = frozenset(reqs_for_extra(None))
File "D:\VEnvs\testpip\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 2811, in reqs_for_extra
if req.marker_fn(override={'extra':extra}):
File "D:\VEnvs\testpip\lib\site-packages\pip\_vendor\_markerlib\markers.py", line 113, in marker_fn
return eval(compiled_marker, environment)
File "<environment marker>", line 1, in <module>
NameError: name 'platform_system' is not defined
```
So, installing requests 2.14.0 implicitly requires pip >= 9.x. Should be at least in the release note as a disclaimer, or be fixed if it's not on purpose. | closed | 2017-05-09T16:19:18Z | 2021-09-08T08:00:45Z | https://github.com/psf/requests/issues/4006 | Ok, so right now with 56% of our downloaders *already* safe and most of the rest one `pip install -U pip` away from being ok, I'm pretty tempted to say that we should consider toughing this out. It's going to annoy a whole bunch of people, but if we can drag the ecosystem forward a bit to the point that people can actually meaningfully use environment markers for conditional dependencies that would be one hell of a public service. | lmazuel | 91 | Ok, so right now with 56% of our downloaders *already* safe and most of the rest one `pip install -U pip` away from being ok, I'm pretty tempted to say that we should consider toughing this out. It's going to annoy a whole bunch of people, but if we can drag the ecosystem forward a bit to the point that people can actually meaningfully use environment markers for conditional dependencies that would be one hell of a public service. | Lukasa | 2017-05-09T16:57:36Z | https://github.com/psf/requests/issues/4006#issuecomment-300230879 | 34 |
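For readers unfamiliar with the feature that tripped older pip here: conditional dependencies are declared with PEP 508 environment markers, and only pip/setuptools new enough to evaluate names like `platform_system` can install a package that uses them. A generic, hedged example (not requests' actual metadata):

```python
# setup.py (illustrative only)
from setuptools import setup

setup(
    name="example",
    version="1.0",
    install_requires=[
        # Installed only on Windows; old pip crashed evaluating this marker.
        'colorama>=0.3; platform_system == "Windows"',
    ],
)
```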
psf/requests | http | 239 | Too many open files | I am building a basic load generator and started running into file descriptor limits. I haven't seen any documentation about how to release resources, so either I am doing it wrong and the docs need updating, or requests is leaking file descriptors somewhere (without support for keepalive, I am slightly confused why any files would be left open at all).
| closed | 2011-11-05T12:37:30Z | 2021-09-08T01:21:03Z | https://github.com/psf/requests/issues/239 | Were you using `requests.async`?
| daleharvey | 81 | Were you using `requests.async`?
| kennethreitz | 2011-11-05T14:27:04Z | https://github.com/psf/requests/issues/239#issuecomment-2640638 | 0 |
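For a load generator on a modern requests, the usual way to keep descriptor usage bounded is to reuse one `Session` (so sockets are pooled rather than re-opened per call) and to close responses promptly. A minimal sketch with placeholder targets:

```python
import requests

urls = ["https://example.org/"] * 100  # placeholder target list

with requests.Session() as session:  # closes pooled sockets on exit
    for url in urls:
        response = session.get(url)
        response.close()  # return the connection instead of leaking it
```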
psf/requests | http | 1,155 | Add MultiDict Support | [Comment by Kenneth Reitz](https://github.com/kennethreitz/requests/pull/1103#issuecomment-12370830):
> We need to support MultiDict. This is long overdue.
| closed | 2013-01-31T18:02:52Z | 2021-09-08T17:05:21Z | https://github.com/psf/requests/issues/1155 | Should the values be ordered or not? I have no preference, I'm just wondering if others do.
| dmckeone | 79 | Should the values be ordered or not? I have no preference, I'm just wondering if others do.
| sigmavirus24 | 2013-02-15T21:34:40Z | https://github.com/psf/requests/issues/1155#issuecomment-13629318 | 0 |
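As of this writing requests never grew a dedicated MultiDict type, but repeated (and ordered) keys can already be expressed as a list of 2-tuples, which covers the common cases:

```python
import requests

# A list of 2-tuples preserves both ordering and repeated keys.
response = requests.get(
    "https://httpbin.org/get",
    params=[("tag", "python"), ("tag", "http"), ("sort", "new")],
)
print(response.url)  # ...?tag=python&tag=http&sort=new
```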
psf/requests | http | 3,006 | requests.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:645) | Here is the first issue.
https://github.com/kennethreitz/requests/issues/2906
Python 3.5.1 (https://www.python.org/downloads/) Virtualenv 14.0.5 Mac OS X 10.11.3
First, I created a virtualenv and ran `pip install requests[security]`.
Then I got
```
>>> from cryptography.hazmat.backends.openssl.backend import backend
>>> print(backend.openssl_version_text())
OpenSSL 1.0.2f 28 Jan 2016
```
which was what I expected.
Everything worked great for about an hour.
Then, some of my own scripts crashed which was normal. After that, when I try to run my script again, I got following exceptions
```
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:645)
requests.packages.urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:645)
requests.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:645)
```
So I opened another Python console and
```
>>> requests.get("https://www.google.com")
<Response [200]>
>>> requests.get("https://www.telegram.org")
Traceback (most recent call last):
File "VirtualenvPath/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 559, in urlopen
body=body, headers=headers)
File "VirtualenvPath/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 345, in _make_request
self._validate_conn(conn)
File "VirtualenvPath/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 784, in _validate_conn
conn.connect()
File "VirtualenvPath/lib/python3.5/site-packages/requests/packages/urllib3/connection.py", line 252, in connect
ssl_version=resolved_ssl_version)
File "VirtualenvPath/lib/python3.5/site-packages/requests/packages/urllib3/util/ssl_.py", line 305, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 376, in wrap_socket
_context=self)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 747, in __init__
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 983, in do_handshake
self._sslobj.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 628, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:645)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "VirtualenvPath/lib/python3.5/site-packages/requests/adapters.py", line 376, in send
timeout=timeout
File "VirtualenvPath/lib/python3.5/site-packages/requests/packages/urllib3/connectionpool.py", line 588, in urlopen
raise SSLError(e)
requests.packages.urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:645)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "VirtualenvPath/lib/python3.5/site-packages/requests/api.py", line 67, in get
return request('get', url, params=params, **kwargs)
File "VirtualenvPath/lib/python3.5/site-packages/requests/api.py", line 53, in request
return session.request(method=method, url=url, **kwargs)
File "VirtualenvPath/lib/python3.5/site-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "VirtualenvPath/lib/python3.5/site-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "VirtualenvPath/lib/python3.5/site-packages/requests/adapters.py", line 447, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:645)
>>>
```
So I rebooted, uninstalled all these libs, and pip-installed them again. Everything worked again.
But after 1 hour or so, same exception again.
| closed | 2016-02-12T03:30:56Z | 2021-09-04T00:06:18Z | https://github.com/psf/requests/issues/3006 | > Then, some of my own scripts crashed which was normal.
What does a crash like that look like?
> After that, when I try to run my script again, I got following exceptions
Are you completely unable to run them at all after that?
| caizixian | 77 | > Then, some of my own scripts crashed which was normal.
What does a crash like that look like?
> After that, when I try to run my script again, I got following exceptions
Are you completely unable to run them at all after that?
| sigmavirus24 | 2016-02-12T04:23:45Z | https://github.com/psf/requests/issues/3006#issuecomment-183174403 | 0 |
psf/requests | http | 2,022 | https GET request fails with "handshake failure" | Related to #1083, perhaps. Standard `requests.get()` for this particular site/page `https://docs.apitools.com/2014/04/24/a-small-router-for-openresty.html` results in:
```
>>> import requests
>>> requests.get('https://docs.apitools.com/2014/04/24/a-small-router-for-openresty.html')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 383, in request
resp = self.send(prep, **send_kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 486, in send
r = adapter.send(request, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/adapters.py", line 385, in send
raise SSLError(e)
requests.exceptions.SSLError: [Errno 1] _ssl.c:504: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
```
Using `requests-toolbelt`'s `SSLAdapter` to try various SSL versions, they all fail, it would seem... see the following tracebacks.
TLSv1:
```
>>> adapter = SSLAdapter('TLSv1')
>>> s = requests.Session()
>>> s.mount('https://', adapter)
>>> s.get('https://docs.apitools.com/2014/04/24/a-small-router-for-openresty.html')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 395, in get
return self.request('GET', url, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 383, in request
resp = self.send(prep, **send_kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 486, in send
r = adapter.send(request, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/adapters.py", line 385, in send
raise SSLError(e)
requests.exceptions.SSLError: [Errno 1] _ssl.c:504: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
```
SSLv3:
```
>>> adapter = SSLAdapter('SSLv3')
>>> s = requests.Session()
>>> s.mount('https://', adapter)
>>> s.get('https://docs.apitools.com/2014/04/24/a-small-router-for-openresty.html')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 395, in get
return self.request('GET', url, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 383, in request
resp = self.send(prep, **send_kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 486, in send
r = adapter.send(request, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/adapters.py", line 385, in send
raise SSLError(e)
requests.exceptions.SSLError: [Errno 1] _ssl.c:504: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
```
SSLv2:
```
>>> adapter = SSLAdapter('SSLv2')
>>> s = requests.Session()
>>> s.mount('https://', adapter)
>>> s.get('https://docs.apitools.com/2014/04/24/a-small-router-for-openresty.html')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 395, in get
return self.request('GET', url, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 383, in request
resp = self.send(prep, **send_kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 486, in send
r = adapter.send(request, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/adapters.py", line 378, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='docs.apitools.com', port=443): Max retries exceeded with url: /2014/04/24/a-small-router-for-openresty.html (Caused by <class 'socket.error'>: [Errno 54] Connection reset by peer)
```
Note the last one gives a `Connection reset by peer` error, which differs from the others, but I'm pretty sure SSLv2 isn't supported by the server anyhow.
For fun, I tried to pass some more appropriate headers through on the last request as well:
```
>>> headers = {
... 'Accept': u"text/html,application/xhtml+xml,application/xml",
... 'User-Agent': u"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36",
... 'Accept-Encoding': u"gzip,deflate",
... 'Accept-Language': u"en-US,en;q=0.8"
... }
>>> adapter = SSLAdapter('SSLv2')
>>> s = requests.Session()
>>> s.mount('https://', adapter)
>>> s.get('https://docs.apitools.com/2014/04/24/a-small-router-for-openresty.html', headers=headers)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 395, in get
return self.request('GET', url, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 383, in request
resp = self.send(prep, **send_kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/sessions.py", line 486, in send
r = adapter.send(request, **kwargs)
File "/Users/jaddison/.virtualenvs/techtown/lib/python2.7/site-packages/requests/adapters.py", line 378, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='docs.apitools.com', port=443): Max retries exceeded with url: /2014/04/24/a-small-router-for-openresty.html (Caused by <class 'socket.error'>: [Errno 54] Connection reset by peer)
```
No dice there either. Here's what the HTTPS connection info in Chrome on Mac looks like:

I'm not positive, but some googling indicates it's likely a cipher list issue, which is more urllib3, I think?
I tried to modify `DEFAULT_CIPHER_LIST` in `pyopenssl`, but started running into import errors. At this point it seemed like things were just broken, and there wasn't really a proper way to approach fixing this yet.
Version information:
OSX Mavericks
Python 2.7.5
OpenSSL 0.9.8y 5 Feb 2013 - (from `python -c "import ssl; print ssl.OPENSSL_VERSION"`)
requests 2.2.1
requests-toolbelt 0.2.0
urllib3 1.8
| closed | 2014-04-26T17:42:19Z | 2018-06-15T02:16:34Z | https://github.com/psf/requests/issues/2022 | Just want to chime in and say that I experienced this issue on OS X 10.9.5, Python 2.7.7 and OpenSSL 0.9.8zc.
I was able to fix my handshaking issue by:
1. Installing a newer-than-stock OpenSSL on my machine via `brew install OpenSSL`
2. Compiling and installing the `cryptography` package linked against the new OpenSSL (`env ARCHFLAGS="-arch x86_64" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" pip install cryptography`)
3. Installing requests with SNI support by doing `pip install requests[security]`
| jaddison | 76 | Just want to chime in and say that I experienced this issue on OS X 10.9.5, Python 2.7.7 and OpenSSL 0.9.8zc.
I was able to fix my handshaking issue by:
1. Installing a newer-than-stock OpenSSL on my machine via `brew install OpenSSL`
2. Compiling and installing the `cryptography` package linked against the new OpenSSL (`env ARCHFLAGS="-arch x86_64" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" pip install cryptography`)
3. Installing requests with SNI support by doing `pip install requests[security]`
| Microserf | 2015-02-23T08:45:47Z | https://github.com/psf/requests/issues/2022#issuecomment-75506581 | 6 |
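For reference, the toolbelt `SSLAdapter` used in the report is roughly the following: it pins the connection pool to a single SSL/TLS version by forwarding `ssl_version` (a sketch of the general approach, not the library's exact source):

```python
from requests.adapters import HTTPAdapter


class SSLAdapter(HTTPAdapter):
    """Pin the connection pool to one SSL/TLS version (sketch)."""

    def __init__(self, ssl_version=None, **kwargs):
        self.ssl_version = ssl_version
        super().__init__(**kwargs)

    def init_poolmanager(self, *args, **kwargs):
        # urllib3 resolves string names such as 'TLSv1' internally.
        kwargs["ssl_version"] = self.ssl_version
        super().init_poolmanager(*args, **kwargs)
```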
psf/requests | http | 1,289 | Python 3.3 requests issue | I'm getting a strange error when using Requests in Python 3.3 (other flavors of Python 3 do not get this error):
```
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 421, in urlopen
body=body, headers=headers)
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 273, in _make_request
conn.request(method, url, **httplib_request_kw)
File "C:\Python33\lib\http\client.py", line 1049, in request
self._send_request(method, url, body, headers)
File "C:\Python33\lib\http\client.py", line 1087, in _send_request
self.endheaders(body)
File "C:\Python33\lib\http\client.py", line 1045, in endheaders
self._send_output(message_body)
File "C:\Python33\lib\http\client.py", line 890, in _send_output
self.send(msg)
File "C:\Python33\lib\http\client.py", line 828, in send
self.connect()
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 104, in connect
ssl_version=resolved_ssl_version)
File "C:\Python33\lib\site-packages\requests\packages\urllib3\util.py", line 329, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Python33\lib\ssl.py", line 210, in wrap_socket
_context=self)
File "C:\Python33\lib\ssl.py", line 310, in __init__
raise x
File "C:\Python33\lib\ssl.py", line 306, in __init__
self.do_handshake()
File "C:\Python33\lib\ssl.py", line 513, in do_handshake
self._sslobj.do_handshake()
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\requests\adapters.py", line 211, in send
timeout=timeout
File "C:\Python33\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 465, in urlopen
raise MaxRetryError(self, url, e)
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='analysiscenter.veracode.com', port=443): Max retries exceeded with url: /api/2.0/getappbuilds.do (Caused by <class 'ConnectionResetError'>: [WinError 10054] An existing connection was forcibly closed by the remote host)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\tfs\Vertafore_TFSDev\CQ\python\veracode\pythontestscriptonation.py", line 33, in <module>
print(v.get_app_builds())
File "C:\tfs\Vertafore_TFSDev\CQ\python\veracode\apiwrapper.py", line 184, in get_app_builds
{}
File "C:\tfs\Vertafore_TFSDev\CQ\python\veracode\apiwrapper.py", line 57, in request
r = requests.get(URL, params=data, auth=username_password)
File "C:\Python33\lib\site-packages\requests\api.py", line 55, in get
return request('get', url, **kwargs)
File "C:\Python33\lib\site-packages\requests\api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 354, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python33\lib\site-packages\requests\sessions.py", line 460, in send
r = adapter.send(request, **kwargs)
File "C:\Python33\lib\site-packages\requests\adapters.py", line 246, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='analysiscenter.veracode.com', port=443): Max retries exceeded with url: /api/2.0/getappbuilds.do (Caused by <class 'ConnectionResetError'>: [WinError 10054] An existing connection was forcibly closed by the remote host)
[Finished in 61.0s with exit code 1]
```
This automation has run for months on Python 3.2 and these errors have never occurred. I don't really know enough about requests to investigate this issue, but I'd be happy to help recreate the issue or debug if someone else can. Perhaps there's a bug in how Requests is handling HTTPS requests with Python 3.3? (Did 3.3 change how urllib works?...I don't know offhand...)
Again, I'm not getting any of these issues in Python 3.2 or Python 3.1. Please help! :)
| closed | 2013-04-04T15:34:17Z | 2021-09-08T18:00:45Z | https://github.com/psf/requests/issues/1289 | Please reply with the following: `python -c 'import requests; print(requests.__version__)'`
| echohack | 67 | Please reply with the following: `python -c 'import requests; print(requests.__version__)'`
| sigmavirus24 | 2013-04-04T16:37:13Z | https://github.com/psf/requests/issues/1289#issuecomment-15908534 | 0 |
psf/requests | http | 1,659 | Update SSL Certs | We really need to automate this and have an official workflow and policy for security releases.
Fixes #1655
### TODO:
- [x] Automate CA Bundle generation
- [ ] Establish documented security policy
### See Also:
- [PEP 453 Security Considerations](http://www.python.org/dev/peps/pep-0453/#security-considerations)
- [kennethreitz/cert-playground](https://github.com/kennethreitz/cert-playground)
| closed | 2013-10-08T22:14:49Z | 2021-09-09T00:34:27Z | https://github.com/psf/requests/issues/1659 | Is the right thing to do here just to rewrite cURL's Perl script that converts the Mozilla text file into a .pem file? I took a look and it shouldn't be too awful to do.
| kennethreitz | 64 | Is the right thing to do here just to rewrite cURL's Perl script that converts the Mozilla text file into a .pem file? I took a look and it shouldn't be too awful to do.
| Lukasa | 2013-10-11T11:03:28Z | https://github.com/psf/requests/issues/1659#issuecomment-26129301 | 0 |
psf/requests | http | 3,070 | Consider making Timeout option required or have a default | I have a feeling I'm about to get a swift education on this topic, but I've been thinking about the pros/cons of changing `requests` so that somehow there is a timeout value configured for every request.
I think there are two ways to do this:
1. Provide a default value. I know browsers have a default, so that may be a simple place to begin.
2. Make every user configure this in every request -- bigger API breakage. Probably not the way to go.
The reason I'm thinking about this is because I've used requests for a few years now and until now I didn't realize the importance of providing a timeout. It took one of my programs hanging forever for me to realize that the default here isn't really very good for my purposes. (I'm in the process of updating all my code...)
I also see that a lot of people want `Session` objects to have a timeout parameter, and this might be a way to do that as well.
If a large default were provided to all requests and all sessions, what negative impact would that have? The only thing I can think of is that some programs will get timeout exceptions where they previously hung, which seems like an improvement to me.
## Caveat, added May 13, 2016:
Please don't use this issue to discuss adding a timeout attribute to requests. There are a number of discussions about this elsewhere (search closed issues), and we don't want that conversation to muddy this issue too. Thanks.
| closed | 2016-03-29T16:56:50Z | 2024-05-21T10:16:30Z | https://github.com/psf/requests/issues/3070 | Hooray for reopening this bug. I like the solution of providing a default with an env override. Seems simple enough to me.
> You really shouldn't be hitting the internet without proper timeouts in place ever in production. Any sane engineer knows that — it's not Requests' job to do your job for you.
I do disagree with this though. The thing I love about Requests (and Python in general) is that these kinds of details are usually taken care of for me. For me, I think that's what so jarring about this issue. A super simple GET request has a gotcha that could bite you way later.
Anyway, thanks for reopening. I really appreciate it and I think the community will greatly benefit from a default timeout. | mlissner | 62 | Hooray for reopening this bug. I like the solution of providing a default with an env override. Seems simple enough to me.
> You really shouldn't be hitting the internet without proper timeouts in place ever in production. Any sane engineer knows that — it's not Requests' job to do your job for you.
I do disagree with this though. The thing I love about Requests (and Python in general) is that these kinds of details are usually taken care of for me. For me, I think that's what so jarring about this issue. A super simple GET request has a gotcha that could bite you way later.
Anyway, thanks for reopening. I really appreciate it and I think the community will greatly benefit from a default timeout. | mlissner | 2017-07-31T21:30:21Z | https://github.com/psf/requests/issues/3070#issuecomment-319202345 | 18 |
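A common application-side pattern while this remained open: subclass `Session` so every request gets a default timeout unless the caller overrides it. The values below are arbitrary placeholders:

```python
import requests


class TimeoutSession(requests.Session):
    """A Session that applies a default timeout to every request."""

    def __init__(self, timeout=(3.05, 27)):  # (connect, read) seconds
        super().__init__()
        self._timeout = timeout

    def request(self, method, url, **kwargs):
        kwargs.setdefault("timeout", self._timeout)
        return super().request(method, url, **kwargs)


session = TimeoutSession()
session.get("https://example.org/")  # raises Timeout instead of hanging
```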
psf/requests | http | 520 | requests leaves a hanging connection for each request | This was already noted in #458, but deserves an extra ticket: When using the simple API (via requests.head/get/post etc.), the requests module leaves a connection open until it is garbage collected. This eats up resources on the server side and is really impolite.
How to reproduce:
Run this script (can somebody please tell me how one can embed a gist in markdown? The help page shows only the result of embedding...):
https://gist.github.com/2234274
Result on my system:
<pre>
torsten@sharokan:~/workspace/loco2-git$ python demo.py
Open sockets after 20 head requests:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 13928 torsten 3u IPv4 616968 0t0 TCP sharokan.fritz.box:50072->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 4u IPv4 616972 0t0 TCP sharokan.fritz.box:50073->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 5u IPv4 616976 0t0 TCP sharokan.fritz.box:50074->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 6u IPv4 616980 0t0 TCP sharokan.fritz.box:50075->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 7u IPv4 616984 0t0 TCP sharokan.fritz.box:50076->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 8u IPv4 616988 0t0 TCP sharokan.fritz.box:50077->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 9u IPv4 616992 0t0 TCP sharokan.fritz.box:50078->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 10u IPv4 616996 0t0 TCP sharokan.fritz.box:50079->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 11u IPv4 617000 0t0 TCP sharokan.fritz.box:50080->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 12u IPv4 617004 0t0 TCP sharokan.fritz.box:50081->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 13u IPv4 617008 0t0 TCP sharokan.fritz.box:50082->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 14u IPv4 617012 0t0 TCP sharokan.fritz.box:50083->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 15u IPv4 617016 0t0 TCP sharokan.fritz.box:50084->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 16u IPv4 617020 0t0 TCP sharokan.fritz.box:50085->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 17u IPv4 617024 0t0 TCP sharokan.fritz.box:50086->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 18u IPv4 617028 0t0 TCP sharokan.fritz.box:50087->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 19u IPv4 617032 0t0 TCP sharokan.fritz.box:50088->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 20u IPv4 617036 0t0 TCP sharokan.fritz.box:50089->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 21u IPv4 617040 0t0 TCP sharokan.fritz.box:50090->bk-in-f94.1e100.net:www (ESTABLISHED)
python 13928 torsten 22u IPv4 617044 0t0 TCP sharokan.fritz.box:50091->bk-in-f94.1e100.net:www (ESTABLISHED)
Garbage collection result: 1700
Open sockets after garbage collection:
</pre>
I think the sockets should be closed immediately when the session is unrefed. FYI, I found this when testing a local webservice by running it in CherryPy which defaults to 10 threads - after 10 requests, the web server was blocked.
| closed | 2012-03-29T07:19:00Z | 2021-09-03T00:10:52Z | https://github.com/psf/requests/issues/520 | Digging further, this seems due to a reference loop and is really a problem inherited from httplib: The HTTPResponse keeps a reference to the file object as _fileobject. I would argue that that reference should be dropped after the complete response was read (it should not be possible to read from the socket of a pooled connection using some old response).
| Bluehorn | 62 | Digging further, this seems due to a reference loop and is really a problem inherited from httplib: The HTTPResponse keeps a reference to the file object as _fileobject. I would argue that that reference should be dropped after the complete response was read (it should not be possible to read from the socket of a pooled connection using some old response).
| Bluehorn | 2012-03-29T08:00:51Z | https://github.com/psf/requests/issues/520#issuecomment-4798576 | 0 |
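On recent versions of requests the deterministic fix is to release the connection explicitly rather than waiting for garbage collection, either via `Response.close()` or by using the response as a context manager (a sketch):

```python
import requests

# The context manager releases the underlying socket as soon as the
# block exits, instead of whenever the garbage collector runs.
with requests.get("https://example.org/") as response:
    data = response.content
```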
psf/requests | http | 2,422 | Broken pipe Exception | I POST a large file to a server, and it raises a `ConnectionError` exception.
```
Traceback (most recent call last):
File "/Users/cloverstd/.virtualenvs/enterprise/lib/python2.7/site-packages/rq/worker.py", line 479, in perform_job
rv = job.perform()
File "/Users/cloverstd/.virtualenvs/enterprise/lib/python2.7/site-packages/rq/job.py", line 466, in perform
self._result = self.func(*self.args, **self.kwargs)
File "./jobs/office.py", line 97, in notice_file
open('test.pdf', 'rb')))
File "/Users/cloverstd/Dropbox/WorkSpace/gench/enterprise/wechatpy/enterprise/client/api/media.py", line 15, in upload
'media': media_file
File "/Users/cloverstd/Dropbox/WorkSpace/gench/enterprise/wechatpy/client/api/base.py", line 20, in _post
return self._client._post(url, **kwargs)
File "/Users/cloverstd/Dropbox/WorkSpace/gench/enterprise/wechatpy/client/base.py", line 74, in _post
**kwargs
File "/Users/cloverstd/Dropbox/WorkSpace/gench/enterprise/wechatpy/client/base.py", line 39, in _request
**kwargs
File "/Users/cloverstd/.virtualenvs/enterprise/lib/python2.7/site-packages/requests/api.py", line 49, in request
response = session.request(method=method, url=url, **kwargs)
File "/Users/cloverstd/.virtualenvs/enterprise/lib/python2.7/site-packages/requests/sessions.py", line 461, in request
resp = self.send(prep, **send_kwargs)
File "/Users/cloverstd/.virtualenvs/enterprise/lib/python2.7/site-packages/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/Users/cloverstd/.virtualenvs/enterprise/lib/python2.7/site-packages/requests/adapters.py", line 415, in send
raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', error(32, 'Broken pipe'))
```
The `test.pdf` is 4.2 MB; when I post it as a file, I get a ConnectionError exception.
But when I use curl to post it to the server, it uploads successfully.
| closed | 2015-01-23T14:03:14Z | 2016-11-30T09:12:06Z | https://github.com/psf/requests/issues/2422 | Can you reproduce it every time? What curl command are you using? Does the
server have any useful log messages?
| cloverstd | 61 | Can you reproduce it every time? What curl command are you using? Does the
server have any useful log messages?
| kevinburke | 2015-01-23T15:48:01Z | https://github.com/psf/requests/issues/2422#issuecomment-71212727 | 0 |
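A broken pipe on a large POST usually means the server closed the connection before the body finished arriving (a size limit is a frequent culprit). One mitigation worth trying is streaming the upload instead of buffering it, for example with requests-toolbelt's `MultipartEncoder`; the endpoint and field name below are placeholders:

```python
import requests
from requests_toolbelt import MultipartEncoder

encoder = MultipartEncoder(
    fields={"media": ("test.pdf", open("test.pdf", "rb"), "application/pdf")}
)
# Streaming the encoder avoids building the whole body in memory first.
response = requests.post(
    "https://example.org/upload",  # placeholder endpoint
    data=encoder,
    headers={"Content-Type": encoder.content_type},
)
```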
psf/requests | http | 624 | Documentation Translation Guide | closed | 2012-05-19T14:53:39Z | 2022-02-26T06:00:22Z | https://github.com/psf/requests/issues/624 | Yes, we need it because I might translate the documentation to Arabic sometime.
| kennethreitz | 59 | Yes, we need it because I might translate the documentation to Arabic sometime.
| toutouastro | 2012-05-19T17:36:48Z | https://github.com/psf/requests/issues/624#issuecomment-5803427 | 1 |
psf/requests | http | 2,214 | verify=False and requests.packages.urllib3.disable_warnings() | As of 1.9 of `urllib3`, the following warning appears once per invocation:
```
/usr/local/lib/python2.7/site-packages/requests-2.4.0-py2.7.egg/requests/packages/urllib3/connectionpool.py:730: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html (This warning will only appear once by default.)
InsecureRequestWarning)
```
When using `verify=False` would it be useful to also set `requests.packages.urllib3.disable_warnings()`?
I understand that this is a design decision that not everyone might agree with. :)
| closed | 2014-09-09T15:23:03Z | 2017-01-21T16:42:04Z | https://github.com/psf/requests/issues/2214 | I think you can disable these at a global level with the `warnings` module. Further, to work with logging (if I remember correctly) you need to reach into `urllib3` (and we document that), so I'm not against documenting this for users who are not going to use certificate verification for HTTPS connections.
| invisiblethreat | 57 | I think you can disable these at a global level with the `warnings` module. Further, to work with logging (if I remember correctly) you need to reach into `urllib3` (and we document that), so I'm not against documenting this for users who are not going to use certificate verification for HTTPS connections.
| sigmavirus24 | 2014-09-09T15:36:37Z | https://github.com/psf/requests/issues/2214#issuecomment-54988365 | 0 |
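For completeness, both silencing routes mentioned above look like this; only do it when `verify=False` is a deliberate choice (the target URL is a placeholder):

```python
import warnings

import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning

# Route 1: urllib3's own switch (vendored inside requests at the time).
requests.packages.urllib3.disable_warnings()

# Route 2: the standard-library warnings filter, scoped to one category.
warnings.simplefilter("ignore", InsecureRequestWarning)

requests.get("https://self-signed.example/", verify=False)  # quiet now
```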
psf/requests | http | 2,699 | timeout=False puts the socket in non-blocking mode, causing weird errors. | I'm receiving a strange connection error using Requests 2.7.0
connecting to an HTTP service as part of an integration test. Below is
the call stack when I call into `session.request` in the API (for
reasons of NDA I can't provide the rest of the callstack).
The error is below in full detail, but in short, send() in requests is emitting `ConnectionError: ('Connection aborted.', error(115, 'Operation now in progress'))`
Python=2.7
Requests=2.7.0
The parameters being passed in to this are:
verb=POST
url=(an HTTP url for an internal service)
data=(a dict)
verify=False
cert=None
headers={'Content-Encoding': 'amz-1.0', 'Connection': 'keep-alive', 'Accept': 'application/json, text/javascript, _/_', 'User-Agent': 'A thing', 'Host': 'hostname', 'Pragma': 'no-cache', 'Cache-Control': 'no-cache', 'Content-Type': 'application/json'} # plus some service headers used by our service
timeout=False
proxies=None
> package-cache/packages/Requests/lib/python2.7/site-packages/requests/api.py:50: in request
> response = session.request(method=method, url=url, *_kwargs)
> package-cache/packages/Requests/lib/python2.7/site-packages/requests/sessions.py:465: in request
> resp = self.send(prep, *_send_kwargs)
> package-cache/packages/Requests/lib/python2.7/site-packages/requests/sessions.py:573: in send
> r = adapter.send(request, **kwargs)
> ---
>
> self = <requests.adapters.HTTPAdapter object at 0x7ff845dccd50>, request = <PreparedRequest [POST]>, stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x7ff845dd0cd0>, verify = False, cert = None
> proxies = {}
>
> ```
> def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
> """Sends PreparedRequest object. Returns Response object.
>
> :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
> :param stream: (optional) Whether to stream the request content.
> :param timeout: (optional) How long to wait for the server to send
> data before giving up, as a float, or a (`connect timeout, read
> timeout <user/advanced.html#timeouts>`_) tuple.
> :type timeout: float or tuple
> :param verify: (optional) Whether to verify SSL certificates.
> :param cert: (optional) Any user-provided SSL certificate to be trusted.
> :param proxies: (optional) The proxies dictionary to apply to the request.
> """
>
> conn = self.get_connection(request.url, proxies)
>
> self.cert_verify(conn, request.url, verify, cert)
> url = self.request_url(request, proxies)
> self.add_headers(request)
>
> chunked = not (request.body is None or 'Content-Length' in request.headers)
>
> if isinstance(timeout, tuple):
> try:
> connect, read = timeout
> timeout = TimeoutSauce(connect=connect, read=read)
> except ValueError as e:
> # this may raise a string formatting error.
> err = ("Invalid timeout {0}. Pass a (connect, read) "
> "timeout tuple, or a single float to set "
> "both timeouts to the same value".format(timeout))
> raise ValueError(err)
> else:
> timeout = TimeoutSauce(connect=timeout, read=timeout)
>
> try:
> if not chunked:
> resp = conn.urlopen(
> method=request.method,
> url=url,
> body=request.body,
> headers=request.headers,
> redirect=False,
> assert_same_host=False,
> preload_content=False,
> decode_content=False,
> retries=self.max_retries,
> timeout=timeout
> )
> # Send the request.
> else:
> if hasattr(conn, 'proxy_pool'):
> conn = conn.proxy_pool
>
> low_conn = conn._get_conn(timeout=timeout)
>
> try:
> low_conn.putrequest(request.method,
> url,
> skip_accept_encoding=True)
>
> for header, value in request.headers.items():
> low_conn.putheader(header, value)
>
> low_conn.endheaders()
>
> for i in request.body:
> low_conn.send(hex(len(i))[2:].encode('utf-8'))
> low_conn.send(b'\r\n')
> low_conn.send(i)
> low_conn.send(b'\r\n')
> low_conn.send(b'0\r\n\r\n')
>
> r = low_conn.getresponse()
> resp = HTTPResponse.from_httplib(
> r,
> pool=conn,
> connection=low_conn,
> preload_content=False,
> decode_content=False
> )
> except:
> # If we hit any problems here, clean up the connection.
> # Then, reraise so that we can handle the actual exception.
> low_conn.close()
> raise
> else:
> # All is well, return the connection to the pool.
> conn._put_conn(low_conn)
>
> except (ProtocolError, socket.error) as err:
> ```
>
> > ```
> > raise ConnectionError(err, request=request)
> > ```
> >
> > E ConnectionError: ('Connection aborted.', error(115, 'Operation now in progress'))
>
> cert = None
> chunked = False
> conn = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0x7ff844c68c10>
> err = ProtocolError('Connection aborted.', error(115, 'Operation now in progress'))
> proxies = {}
> request = <PreparedRequest [POST]>
> self = <requests.adapters.HTTPAdapter object at 0x7ff845dccd50>
> stream = False
> timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0x7ff845dd0cd0>
> url = '/'
> verify = False
>
> package-cache/packages/Requests/lib/python2.7/site-packages/requests/adapters.py:415: ConnectionError
| closed | 2015-07-30T20:33:15Z | 2021-09-08T14:00:37Z | https://github.com/psf/requests/issues/2699 | (Hi Chris!)
Well this is weird. Are you passing any socket options?
| offbyone | 56 | (Hi Chris!)
Well this is weird. Are you passing any socket options?
| Lukasa | 2015-07-30T20:36:12Z | https://github.com/psf/requests/issues/2699#issuecomment-126476835 | 0 |
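The diagnosis in the title can be reproduced without requests at all: `False` is just `0` at the socket layer, and a zero timeout means non-blocking mode, where `connect()` surfaces `EINPROGRESS` ("Operation now in progress") instead of waiting. A small demonstration:

```python
import socket

sock = socket.socket()
# bool is an int subclass, so timeout=False behaves as timeout=0 ...
sock.settimeout(False)
print(sock.gettimeout())  # 0.0 -> non-blocking mode
try:
    sock.connect(("example.org", 80))
except OSError as exc:
    # On Linux this is errno 115, EINPROGRESS: the error in the report.
    print(exc)
finally:
    sock.close()
```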
psf/requests | http | 1,685 | Possible Memory Leak | I have a very simple program that periodically retrieves an image from an IP camera. I've noticed that the working set of this program grows monotonically. I've written a small program that reproduces the issue.
``` python
import requests
from memory_profiler import profile

@profile
def lol():
    print "sending request"
    r = requests.get('http://cachefly.cachefly.net/10mb.test')
    print "reading.."
    with open("test.dat", "wb") as f:
        f.write(r.content)
    print "Finished..."

if __name__ == "__main__":
    for i in xrange(100):
        print "Iteration", i
        lol()
```
The memory usage is printed at the end of each iteration. This is the sample output.
**Iteration 0**
```
Iteration 0
sending request
reading..
Finished...
Filename: test.py
Line # Mem usage Increment Line Contents
================================================
5 12.5 MiB 0.0 MiB @profile
6 def lol():
7 12.5 MiB 0.0 MiB print "sending request"
8 35.6 MiB 23.1 MiB r = requests.get('http://cachefly.cachefly.net/10mb.test')
9 35.6 MiB 0.0 MiB print "reading.."
10 35.6 MiB 0.0 MiB with open("test.dat", "wb") as f:
11 35.6 MiB 0.0 MiB f.write(r.content)
12 35.6 MiB 0.0 MiB print "Finished..."
```
**Iteration 1**
```
Iteration 1
sending request
reading..
Finished...
Filename: test.py
Line # Mem usage Increment Line Contents
================================================
5 35.6 MiB 0.0 MiB @profile
6 def lol():
7 35.6 MiB 0.0 MiB print "sending request"
8 36.3 MiB 0.7 MiB r = requests.get('http://cachefly.cachefly.net/10mb.test')
9 36.3 MiB 0.0 MiB print "reading.."
10 36.3 MiB 0.0 MiB with open("test.dat", "wb") as f:
11 36.3 MiB 0.0 MiB f.write(r.content)
12 36.3 MiB 0.0 MiB print "Finished..."
```
The memory usage does not grow with every iteration, but it does continue to creep up with `requests.get` being the culprit that increases memory usage.
By **Iteration 99**, this is what the memory profile looks like.
```
Iteration 99
sending request
reading..
Finished...
Filename: test.py
Line # Mem usage Increment Line Contents
================================================
     5     40.7 MiB      0.0 MiB   @profile
     6                             def lol():
     7     40.7 MiB      0.0 MiB       print "sending request"
     8     40.7 MiB      0.0 MiB       r = requests.get('http://cachefly.cachefly.net/10mb.test')
     9     40.7 MiB      0.0 MiB       print "reading.."
    10     40.7 MiB      0.0 MiB       with open("test.dat", "wb") as f:
    11     40.7 MiB      0.0 MiB           f.write(r.content)
    12     40.7 MiB      0.0 MiB       print "Finished..."
```
Memory usage doesn't drop unless the program is terminated.
Is there a bug or is it user error?
| closed | 2013-10-17T17:03:10Z | 2021-09-05T00:06:47Z | https://github.com/psf/requests/issues/1685 | Thanks for raising this and providing so much detail!
Tell me, do you ever see the memory usage go down at any point?
| philip-goh | 54 | Thanks for raising this and providing so much detail!
Tell me, do you ever see the memory usage go down at any point?
| Lukasa | 2013-10-18T06:14:10Z | https://github.com/psf/requests/issues/1685#issuecomment-26574258 | 0 |
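A pattern that keeps the working set roughly flat for downloads like the one above is to stream the body to disk in small chunks instead of materializing `r.content`. A minimal sketch of that approach (the URL and chunk size are taken from the report purely for illustration; using the response as a context manager needs a reasonably recent requests, otherwise call `r.close()`):
```python
import requests

def download(url, path, chunk_size=64 * 1024):
    # stream=True defers the body transfer until iter_content is called,
    # so only one chunk at a time is held in memory.
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)

if __name__ == "__main__":
    for i in range(100):
        print("Iteration", i)
        download('http://cachefly.cachefly.net/10mb.test', 'test.dat')
```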
psf/requests | http | 1,547 | Installation fails in Cygwin 64 bit | When I try a pip install requests in Cygwin 64 bit the installation fails.
Here's the end of the log file:
```
Downloading from URL https://pypi.python.org/packages/source/r/requests/requests-1.2.3.tar.gz#md5=adbd3f18445f7fe5e77f65c502e264fb (from https://pypi.python.org/simple/requests/)
Running setup.py egg_info for package requests
Cleaning up...
Removing temporary dir /cygdrive/d/home/greger/cygwin/projects/myproject/build...
No files/directories in /cygdrive/d/home/greger/cygwin/projects/myproject/build/requests/pip-egg-info (from PKG-INFO)
Exception information:
Traceback (most recent call last):
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/basecommand.py", line 134, in main
status = self.run(options, args)
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/commands/install.py", line 236, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 1139, in prepare_files
req_to_install.assert_source_matches_version()
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 394, in assert_source_matches_version
version = self.installed_version
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 390, in installed_version
return self.pkg_info()['version']
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 357, in pkg_info
data = self.egg_info_data('PKG-INFO')
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 293, in egg_info_data
filename = self.egg_info_path(filename)
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 330, in egg_info_path
raise InstallationError('No files/directories in %s (from %s)' % (base, filename))
InstallationError: No files/directories in /cygdrive/d/home/greger/cygwin/projects/myproject/build/requests/pip-egg-info (from PKG-INFO)
```
Don't know if that is a problem with the installation package, or cygwin itself. But thought I'd try here first.
If I try to use easy_install it just leaves all the files in the tmp-dir. And if I then try to do a python setup.py install from there, and then try to import requests from the Python shell, it just exits without any output.
| closed | 2013-08-21T22:00:09Z | 2021-09-09T00:00:57Z | https://github.com/psf/requests/issues/1547 | Hmm, I can't reproduce this. =( But I wonder if you're using a Windows virtualenv with a Windows interpreter in Cygwin. It's not clear (to me) how well that would work.
| gregersn | 54 | Hmm, I can't reproduce this. =( But I wonder if you're using a Windows virtualenv with a Windows interpreter in Cygwin. It's not clear (to me) how well that would work.
| Lukasa | 2013-08-27T08:26:05Z | https://github.com/psf/requests/issues/1547#issuecomment-23320245 | 0 |
psf/requests | http | 2,725 | Requests.put blocking on Windows (Tika-Python) | Over in the [Apache Tika Python Port](http://github.com/chrismattmann/tika-python/) I'm noticing in [tika-python#44](http://github.com/chrismattmann/tika-python/issues/44) and in [tika-python#58](http://github.com/chrismattmann/tika-python/issues/58) some odd behavior with requests on Python 2.7.9. For whatever reason, when using a file handle and putting it with requests.put it blocks and blocks until finally it gets (correctly) a BadStatusLine back after a timeout. Anyone else seen this?
| closed | 2015-08-14T16:22:14Z | 2021-09-08T21:00:56Z | https://github.com/psf/requests/issues/2725 | Any requests call that uploads a request body will send the entire body before attempting to read the response. That means that, if the remote end does not close that connection abruptly (throwing an Exception on our end), we'll block until the response has been entirely sent.
Sadly, we don't support the 100-continue flow at this time (because httplib has no way of letting us see what's going on there), so it's difficult for us to do anything else.
| chrismattmann | 52 | Any requests call that uploads a request body will send the entire body before attempting to read the response. That means that, if the remote end does not close that connection abruptly (throwing an Exception on our end), we'll block until the response has been entirely sent.
Sadly, we don't support the 100-continue flow at this time (because httplib has no way of letting us see what's going on there), so it's difficult for us to do anything else.
| Lukasa | 2015-08-14T19:31:59Z | https://github.com/psf/requests/issues/2725#issuecomment-131215508 | 0 |
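Given that behaviour, a hung PUT can at least be bounded with the `timeout` parameter, which applies to each individual socket operation, including the wait for the final response after the body has been sent. A sketch; the local Tika endpoint URL here is an assumption, not taken from the thread:
```python
import requests

with open("document.pdf", "rb") as f:
    try:
        # timeout=(connect, read): the read timeout also covers the wait
        # for the status line once the whole body has been uploaded.
        r = requests.put("http://localhost:9998/tika", data=f, timeout=(3.0, 30.0))
        print(r.status_code)
    except requests.exceptions.Timeout:
        print("body was sent, but no response arrived in time")
```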
psf/requests | http | 1,648 | Header "Transfer-Encoding: chunked" set even if Content-Length is provided which causes body to not actually get chunked | ## Test script
```
import requests
import time
def f():
    yield b"lol"
    time.sleep(2)
    yield b"man"

requests.post('http://127.0.0.1:8801/', data=f(), headers={"Content-Length": 6})
```
## Actual result
Received on the server:
```
$ nc -p 8801 -l
POST / HTTP/1.1
Host: 127.0.0.1:8801
User-Agent: python-requests/2.0.0 CPython/3.3.1 Linux/3.11.0-031100rc4-generic
Accept: */*
Transfer-Encoding: chunked
Content-Length: 6
Accept-Encoding: gzip, deflate, compress
lolman
```
## Expected result
Did not expect "Transfer-Encoding: chunked" since I provided the Content-Length. If requests insists on doing chunked transfer encoding, it should disregard the content length and actually chunk the content (as it does if no Content-Length header is given).
| closed | 2013-10-04T00:02:56Z | 2021-09-08T12:00:49Z | https://github.com/psf/requests/issues/1648 | I agree, we should pick one of those two options. =)
| ysangkok | 52 | I agree, we should pick one of those two options. =)
| Lukasa | 2013-10-04T08:54:09Z | https://github.com/psf/requests/issues/1648#issuecomment-25684368 | 0 |
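For comparison, omitting the Content-Length header entirely makes requests frame the body correctly: it sees a generator, cannot compute a length, and sends a real chunked body. A sketch against the same netcat listener:
```python
import time
import requests

def body():
    yield b"lol"
    time.sleep(2)
    yield b"man"

# No Content-Length supplied: requests sets Transfer-Encoding: chunked
# and emits each yielded piece as a properly framed chunk.
requests.post('http://127.0.0.1:8801/', data=body())
```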
psf/requests | http | 844 | umbrella ticket to resolve iteration / read size / chunked encoding questions | This ticket is intended to aggregate previous discussion from #539, #589, and #597 about the default value of `chunk_size` used by `iter_content` and `iter_lines`.
cc @mponton @gwrtheyrn @shazow
Issues:
1. The default read size of `iter_content` is 1 byte; this is probably inefficient
2. Requests does not expose the ability to read chunked encoding streams in the "correct" way, i.e., using the provided octet counts to tell how much to read.
3. However, this would not be suitable as the default implementation of `iter_content` anyway; not all websites are standards-compliant and when this was tried it caused more problems than it solved.
4. The current default read size for `iter_lines` is 10kB. This is high enough that iteration over lines can be perceived as unresponsive --- no lines are returned until all 10kB have been read.
5. There is no "correct" way to implement `iter_lines` using blocking I/O, we just have to bite the bullet and take a guess as to how much data we should read.
6. There's apparently some nondeterminism in `iter_lines`, I think because of the edge case where a read ends between a `\r` and a `\n`.
7. `iter_lines` is backed by `iter_content`, which operates on raw byte strings and splits at byte boundaries. I think there may be edge cases where we could split the body in the middle of a multi-byte encoding of a Unicode character.
My guess at a solution:
1. Set the default `chunk_size` to 1024 bytes, for both `iter_content` and `iter_lines`.
2. Provide a separate interface (possibly `iter_chunks`) for iterating over chunks of pages that are known to correctly implement chunked encoding, e.g., Twitter's firehose APIs
3. We may need our own implementation of `splitlines` that is deterministic with respect to our chunking boundaries, i.e., remembers if the last-read character was `\r` and suppresses a subsequent `\n`. We may also need to build in Unicode awareness at this level, i.e., decode as much of the body as is valid, then save any leftover invalid bytes to be prepended to the next chunk.
Comments and thoughts are much appreciated. Thanks for your time!
| closed | 2012-09-07T03:54:10Z | 2021-09-08T23:06:04Z | https://github.com/psf/requests/issues/844 | :+1: for `iter_chunks`
| slingamn | 52 | :+1: for `iter_chunks`
| kennethreitz | 2012-09-07T07:05:17Z | https://github.com/psf/requests/issues/844#issuecomment-8357784 | 0 |
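Point 3 of the proposed solution, a splitlines that stays deterministic across chunk boundaries, boils down to buffering the unterminated tail of each chunk, including a lone trailing carriage return whose newline may arrive in the next read. A rough sketch of that idea (the function name and structure are mine, not the eventual requests implementation):
```python
def iter_lines(chunks):
    r"""Yield lines from an iterable of byte chunks, splitting on \r, \n
    or \r\n even when the pair straddles a chunk boundary."""
    pending = b""
    for chunk in chunks:
        buf = pending + chunk
        lines = buf.splitlines()  # bytes.splitlines handles \r, \n, \r\n
        if buf.endswith(b"\r"):
            # The matching \n may be the first byte of the next chunk,
            # so hold the \r back instead of emitting a line now.
            pending = lines.pop() + b"\r"
        elif buf and buf[-1:] not in (b"\r", b"\n"):
            pending = lines.pop()  # unterminated final line
        else:
            pending = b""
        for line in lines:
            yield line
    if pending:
        yield pending.rstrip(b"\r")

print(list(iter_lines([b"first\r", b"\nsecond\r\nthi", b"rd\n"])))
# [b'first', b'second', b'third']
```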
psf/requests | http | 1,252 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 169: ordinal not in range(128) | Something similar has been posted before: https://github.com/kennethreitz/requests/issues/403
This is using `requests 1.1.0`
But this problem is still popping up while trying to post _just_ a file _and_ a file with data.
On top of the similar issue, I've posted about this before and in `requests_oauthlib` it was said to have been fixed; if you wish, I'll try and find the issue in that lib, just too lazy to open a new tab now ;P
Error:
```
Traceback (most recent call last):
File "/Users/mikehelmick/.virtualenv/twython/lib/python2.7/site-packages/requests/sessions.py", line 340, in post
return self.request('POST', url, data=data, **kwargs)
File "/Users/mikehelmick/.virtualenv/twython/lib/python2.7/site-packages/requests/sessions.py", line 279, in request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
File "/Users/mikehelmick/.virtualenv/twython/lib/python2.7/site-packages/requests/sessions.py", line 374, in send
r = adapter.send(request, **kwargs)
File "/Users/mikehelmick/.virtualenv/twython/lib/python2.7/site-packages/requests/adapters.py", line 174, in send
timeout=timeout
File "/Users/mikehelmick/.virtualenv/twython/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 422, in urlopen
body=body, headers=headers)
File "/Users/mikehelmick/.virtualenv/twython/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 274, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 955, in request
self._send_request(method, url, body, headers)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 989, in _send_request
self.endheaders(body)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 951, in endheaders
self._send_output(message_body)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 809, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 169: ordinal not in range(128)
```
I posted a gist with sample code if any of you needed to test it. If you guys are really being lazy, I can post some app/user tokens for you to use (let me know).
Gist: https://gist.github.com/michaelhelmick/5199754
| closed | 2013-03-20T22:48:10Z | 2021-09-08T11:00:35Z | https://github.com/psf/requests/issues/1252 | So here's my first guess (but I'll obviously look into this more): I think that OAuth1 might be generating unicode and opening the file in binary form will cause this issue due to how httplib sends the message body. Also this may be related to #1250.
@Lukasa, opinions?
| michaelhelmick | 50 | So here's my first guess (but I'll obviously look into this more): I think that OAuth1 might be generating unicode and opening the file in binary form will cause this issue due to how httplib sends the message body. Also this may be related to #1250.
@Lukasa, opinions?
| sigmavirus24 | 2013-03-21T12:48:28Z | https://github.com/psf/requests/issues/1252#issuecomment-15235102 | 0 |
psf/requests | http | 1,906 | OpenSSL.SSL.Error: [('SSL routines', 'SSL3_GET_RECORD', 'decryption failed or bad record mac')] | It seems that the latest requests (2.2.1) is also affected by this bug:
OpenSSL.SSL.Error: [('SSL routines', 'SSL3_GET_RECORD', 'decryption failed or bad record mac')]
There seems to be a workaround here http://stackoverflow.com/questions/21497591/urllib2-reading-https-url-failure but I don't know how to apply it to requests.
| closed | 2014-02-07T10:31:07Z | 2021-09-08T08:00:38Z | https://github.com/psf/requests/issues/1906 | Thanks for this!
Yeah, this isn't really a request bug, as the SO question highlights: it's a Debian or OpenSSL bug.
With that said, a possible workaround would be an extension of the transport adapter demonstrated on my blog, here: https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/
| ssbarnea | 49 | Thanks for this!
Yeah, this isn't really a request bug, as the SO question highlights: it's a Debian or OpenSSL bug.
With that said, a possible workaround would be an extension of the transport adapter demonstrated on my blog, here: https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/
| Lukasa | 2014-02-07T11:12:15Z | https://github.com/psf/requests/issues/1906#issuecomment-34427401 | 0 |
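The transport-adapter workaround linked above pins the SSL/TLS version at the connection-pool level. For the requests 2.x era it looked roughly like this sketch (based on the linked blog post's pattern, not code from the thread; note that PROTOCOL_TLSv1 is deprecated on modern OpenSSL builds, where you would pin a minimum version instead):
```python
import ssl
import requests
from requests.adapters import HTTPAdapter

class TLSv1Adapter(HTTPAdapter):
    """Force every HTTPS connection made through this adapter to TLSv1."""
    def init_poolmanager(self, *args, **kwargs):
        kwargs["ssl_version"] = ssl.PROTOCOL_TLSv1
        super(TLSv1Adapter, self).init_poolmanager(*args, **kwargs)

s = requests.Session()
s.mount("https://", TLSv1Adapter())
r = s.get("https://example.com/")
```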
psf/requests | http | 1,336 | Set cookie on 401 | If a 401 response contains a Set-Cookie header, that header is not returned on the subsequent AUTH.
An application requiring that cookie (e.g. a session ID) on the AUTH fails.
| closed | 2013-04-30T03:19:58Z | 2021-09-08T23:10:56Z | https://github.com/psf/requests/issues/1336 | `HTTPDigestAuth.handle_401()` needs to persist cookies. Thanks for reporting this! I'm marking this as contributor friendly, anyone who wants it should take it. I'm looking at @gazpachoking and @sigmavirus24, they're the cookie masters here.
| kervinpierre | 47 | `HTTPDigestAuth.handle_401()` needs to persist cookies. Thanks for reporting this! I'm marking this as contributor friendly, anyone who wants it should take it. I'm looking at @gazpachoking and @sigmavirus24, they're the cookie masters here.
| Lukasa | 2013-04-30T08:27:45Z | https://github.com/psf/requests/issues/1336#issuecomment-17215073 | 0 |
psf/requests | http | 771 | socket leak (suspected due to redirection) | It seems requests forgets to properly close a socket when a redirection occurs.
See the following gist for the code that causes the issue (at least here..):
https://gist.github.com/3313868
| closed | 2012-08-10T12:21:29Z | 2021-09-09T05:30:52Z | https://github.com/psf/requests/issues/771 | Have you tried the latest release? v0.13.6 fixed a bunch of socket leaking
| eranrund | 47 | Have you tried the latest release? v0.13.6 fixed a bunch of socket leaking
| kennethreitz | 2012-08-10T16:02:38Z | https://github.com/psf/requests/issues/771#issuecomment-7648855 | 0 |
psf/requests | http | 749 | TLS SNI Support | It seems I'm having problems using requests with servers that need TLS SNI support. When will requests have this feature? Is this something that is planned?
Thanks,
--Ram
| closed | 2012-08-01T21:40:20Z | 2021-02-14T17:04:28Z | https://github.com/psf/requests/issues/749 | Not possible in Python 2.x on top of the stdlib's ssl module.
| pythonmobile | 47 | Not possible in Python 2.x on top of the stdlib's ssl module.
| mitsuhiko | 2012-08-01T22:29:13Z | https://github.com/psf/requests/issues/749#issuecomment-7442115 | 0 |
psf/requests | http | 1,390 | Async mode seems to have disappeared | Hey,
I'm trying to find some kind of documentation about how to issue multiple concurrent HTTP requests using this library, but all the Googling and searching I'm doing points to outdated versions of this library. Is there a chance you can add back the docs? Even if they just mention the features are deprecated and point users in the correct location, it's probably useful.
Best,
Kevin
| closed | 2013-05-26T21:10:52Z | 2024-05-20T14:36:04Z | https://github.com/psf/requests/issues/1390 | Hi Kevin, thanks for raising this issue!
I'd have to say that [grequests](https://github.com/kennethreitz/grequests) is likely to be the right way to go about this. It might be worth adding grequests to the documentation, in fact. I'll have a think about it. =)
| kevinburke | 46 | Hi Kevin, thanks for raising this issue!
I'd have to say that [grequests](https://github.com/kennethreitz/grequests) is likely to be the right way to go about this. It might be worth adding grequests to the documentation, in fact. I'll have a think about it. =)
| Lukasa | 2013-05-26T21:16:21Z | https://github.com/psf/requests/issues/1390#issuecomment-18469864 | 0 |
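grequests rides on gevent; if pulling in gevent is undesirable, a thread pool over a shared Session is a dependency-free way to issue concurrent requests. A sketch, with placeholder URLs:
```python
from concurrent.futures import ThreadPoolExecutor
import requests

urls = ["https://httpbin.org/get?i=%d" % i for i in range(10)]
session = requests.Session()  # shared session so connections are pooled

def fetch(url):
    return session.get(url, timeout=10).status_code

with ThreadPoolExecutor(max_workers=5) as pool:
    # pool.map preserves input order, so results line up with urls.
    for url, status in zip(urls, pool.map(fetch, urls)):
        print(status, url)
```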
psf/requests | http | 2,338 | Proxy not working | This code works on my personal laptop, but when I move to a Windows server where we have to use a proxy to access the outside world, I get a 407 error. I added the proxies (just like the ones set up in my Internet Explorer settings, which work). Any ideas?
import requests
proxies = {
    "http": "10.51.1.140:8080",
    "https": "10.51.1.140:8080",
}
r = requests.get('https://epfws.usps.gov/ws/resources/epf/version', proxies=proxies)
print(r.text)
Here is what I get:
G:\Python34>python uspsver.py
Traceback (most recent call last):
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\packages
\urllib3\connectionpool.py", line 511, in urlopen
conn = self._get_conn(timeout=pool_timeout)
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\packages
\urllib3\connectionpool.py", line 231, in _get_conn
return conn or self._new_conn()
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\packages
\urllib3\connectionpool.py", line 712, in _new_conn
return self._prepare_conn(conn)
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\packages
\urllib3\connectionpool.py", line 685, in _prepare_conn
conn.connect()
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\packages
\urllib3\connection.py", line 208, in connect
self._tunnel()
File "G:\Python34\lib\http\client.py", line 822, in _tunnel
message.strip()))
OSError: Tunnel connection failed: 407 Proxy Authentication Required
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\adapters
.py", line 364, in send
timeout=timeout
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\packages
\urllib3\connectionpool.py", line 559, in urlopen
_pool=self, _stacktrace=stacktrace)
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\packages
\urllib3\util\retry.py", line 265, in increment
raise MaxRetryError(_pool, url, error)
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='ep
fws.usps.gov', port=443): Max retries exceeded with url: /ws/resources/epf/versi
on (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection
failed: 407 Proxy Authentication Required',)))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "uspsver.py", line 5, in <module>
r = requests.get('https://epfws.usps.gov/ws/resources/epf/version', proxies=
proxies)
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\api.py",
line 65, in get
return request('get', url, *_kwargs)
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\api.py",
line 49, in request
response = session.request(method=method, url=url, *_kwargs)
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\sessions
.py", line 459, in request
resp = self.send(prep, *_send_kwargs)
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\sessions
.py", line 571, in send
r = adapter.send(request, *_kwargs)
File "G:\Python34\lib\site-packages\requests-2.4.3-py3.4.egg\requests\adapters
.py", line 415, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='epfws.usps.gov',
port=443): Max retries exceeded with url: /ws/resources/epf/version (Caused by P
roxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Pro
xy Authentication Required',)))
G:\Python34>
| closed | 2014-11-13T17:13:41Z | 2021-09-05T00:06:57Z | https://github.com/psf/requests/issues/2338 | The error here is extremely clear: _407 Proxy Authentication Required_.
You need to send some user credentials to the proxy. The real risk here is that the proxy is a Windows domain auth proxy, in which case this will be extremely difficult.
| dmorri | 45 | The error here is extremely clear: _407 Proxy Authentication Required_.
You need to send some user credentials to the proxy. The real risk here is that the proxy is a Windows domain auth proxy, in which case this will be extremely difficult.
| Lukasa | 2014-11-13T17:19:59Z | https://github.com/psf/requests/issues/2338#issuecomment-62930231 | 0 |
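For plain basic auth, the proxy credentials go into the proxy URL itself; a sketch (the username and password below are placeholders, and if the proxy does Windows domain/NTLM authentication this alone will not satisfy it):
```python
import requests

proxies = {
    "http":  "http://PROXYUSER:PROXYPASS@10.51.1.140:8080",
    "https": "http://PROXYUSER:PROXYPASS@10.51.1.140:8080",
}

r = requests.get('https://epfws.usps.gov/ws/resources/epf/version',
                 proxies=proxies)
print(r.text)
```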
psf/requests | http | 3,213 | Importing requests is fairly slow | We are using requests in our projects, and it is working great. Unfortunately, for our CLI tools, using requests is an issue because it is slow to import. E.g. on my 2014 macbook:
```
python -c "import requests"
```
Takes close to 90 ms.
Is optimizing import time worthy of consideration for the project?
| closed | 2016-05-21T12:23:08Z | 2021-09-08T07:00:40Z | https://github.com/psf/requests/issues/3213 | This is almost certainly the result of CFFI. Can you print what libraries you have installed in your environment (`pip freeze`)?
| cournape | 44 | This is almost certainly the result of CFFI. Can you print what libraries you have installed in your environment (`pip freeze`)?
| Lukasa | 2016-05-21T12:26:19Z | https://github.com/psf/requests/issues/3213#issuecomment-220775062 | 0 |
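One way to measure the cold-import cost yourself is to time a fresh interpreter per module; a small sketch:
```python
import subprocess
import sys
import time

for mod in ("json", "requests"):
    start = time.perf_counter()
    # A subprocess per module so each measurement is a true cold import.
    subprocess.check_call([sys.executable, "-c", "import " + mod])
    elapsed = time.perf_counter() - start
    print("%-8s %.3fs (interpreter startup + import)" % (mod, elapsed))
```
On Python 3.7+, `python -X importtime -c "import requests"` breaks the cost down per imported module, which makes culprits such as a CFFI-backed dependency easy to spot.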
psf/requests | http | 2,935 | KeyError in connectionpool? | I'm getting a weird error during this call `conn = old_pool.get(block=False)` in connectionpool.py:410.
The error in Sentry is: `Queue in get, KeyError: (1, True)`
And this is the traceback, which doesn't make sense, as `requests/packages/urllib3/connectionpool.py` catches that Empty exception; plus I have no idea where that KeyError is coming from... any ideas?:
``` python
Stacktrace (most recent call last):
File "pac/business/views.py", line 49, in get_eticket
response = requests.get(url, timeout=timeout, verify=False)
File "requests/api.py", line 69, in get
return request('get', url, params=params, **kwargs)
File "requests/api.py", line 54, in request
session.close()
File "requests/sessions.py", line 649, in close
v.close()
File "requests/adapters.py", line 264, in close
self.poolmanager.clear()
File "requests/packages/urllib3/poolmanager.py", line 99, in clear
self.pools.clear()
File "requests/packages/urllib3/_collections.py", line 93, in clear
self.dispose_func(value)
File "requests/packages/urllib3/poolmanager.py", line 65, in <lambda>
dispose_func=lambda p: p.close())
File "requests/packages/urllib3/connectionpool.py", line 410, in close
conn = old_pool.get(block=False)
File "python2.7/Queue.py", line 165, in get
raise Empty
```
| closed | 2015-12-18T00:38:06Z | 2021-09-08T19:00:30Z | https://github.com/psf/requests/issues/2935 | @Kronuz What version of requests are you using, please?
| Kronuz | 44 | @Kronuz What version of requests are you using, please?
| Lukasa | 2015-12-18T08:59:59Z | https://github.com/psf/requests/issues/2935#issuecomment-165717762 | 0 |
psf/requests | http | 465 | no way to read uncompressed content as file-like object | According to the documentation, there are three ways to read the content of the response: `.text`, `.content` and `.raw`. The first two consider the transfer encoding and decompress the stream automatically when producing their in-memory result. However, especially for the case that the result is large, there is currently no simple way to get at the decompressed result in the form of a file-like object, e.g. to pass it straight into an XML or Json parser.
From the point of view of a library that aims to make HTTP requests user friendly, why should a user have to care about something as low-level as the compression type of the stream that was internally negotiated between the web server and the library? After all, it's the library's "fault" if it defaults to accepting such a stream. In this light, the `.raw` stream is a bit too raw for my taste.
Maybe a fourth property like `.stream` might provide a better abstraction level?
| closed | 2012-02-29T18:14:13Z | 2021-09-09T04:00:38Z | https://github.com/psf/requests/issues/465 | `Response.iter_content`
| scoder | 44 | `Response.iter_content`
| kennethreitz | 2012-02-29T18:15:07Z | https://github.com/psf/requests/issues/465#issuecomment-4243324 | 0 |
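Besides `iter_content`, the underlying urllib3 response can serve as a decompressing file-like object, which is handy for streaming parsers. A sketch (the URL is a placeholder; `json.load` accepts the bytes that `r.raw.read()` returns on Python 3.6+):
```python
import json
import requests

r = requests.get("https://httpbin.org/json", stream=True)
r.raw.decode_content = True  # decompress gzip/deflate transparently on read
data = json.load(r.raw)      # r.raw exposes .read(), so it works as a file
print(data)
```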
psf/requests | http | 3,948 | Requests not working in a Docker container | Hi
That's the old story about SSL not working with requests, but one step further: Docker containers.
I have an application that uses requests, and it works fine on my local machine, but when deploying it in a Docker container, I get an error from the requests module (SSL error):
[2017-03-31 11:32:29,863] ERROR in app: Exception on /send [POST]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "app.py", line 62, in sendrequest
response=sess.post(url,params, headers=h,verify=False)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 535, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 497, in send
raise SSLError(e, request=request)
SSLError: ("bad handshake: SysCallError(-1, 'Unexpected EOF')",)
I have heard it might be related to OpenSSL. Any idea how this can be resolved? Should I include any dependency?
| closed | 2017-03-31T11:38:41Z | 2021-11-03T19:00:30Z | https://github.com/psf/requests/issues/3948 | Can you run `openssl version` in your container? | javixeneize | 42 | Can you run `openssl version` in your container? | sigmavirus24 | 2017-03-31T11:40:28Z | https://github.com/psf/requests/issues/3948#issuecomment-290690219 | 0 |
psf/requests | http | 2,424 | Consider Requests' Inclusion in Python 3.5's Standard Library | There's a lot to this, but I'll keep it simple...
Would the Python community, as a whole, benefit from Requests being added into the standard library?
Would love to hear your thoughts and opinions on the subject!
| closed | 2015-01-25T17:17:59Z | 2021-09-08T22:00:48Z | https://github.com/psf/requests/issues/2424 | Yes, because it simplifies the entire process and does not sacrifice the performance. So yes.
| kennethreitz | 42 | Yes, because it simplifies the entire process and does not sacrifice the performance. So yes.
| gamesbrainiac | 2015-01-25T17:22:19Z | https://github.com/psf/requests/issues/2424#issuecomment-71382004 | 0 |
psf/requests | http | 2,409 | Consecutive requests with Session() raises 404 | In [1]: import requests
In [2]: s = requests.Session()
In [3]: s.get('https://www.flavourly.com/start-subscription/12/', verify=False)
Out[3]: <Response [200]>
In [4]: s.get('https://www.flavourly.com/start-subscription/12/', verify=False)
Out[4]: <Response [404]>
Is it me or is this a bug? I've checked and it only happens in versions higher than 2.3.0.
| closed | 2015-01-15T20:38:19Z | 2021-09-08T01:21:19Z | https://github.com/psf/requests/issues/2409 | I don't think it's a bug, I think it's because of cookies. Flavourly doesn't seem to like the cookies we're sending back. Try checking the difference between the value of `s.cookies` after the first request, once in something where you don't see the 'bug', and once in a version where you do.
| RossLote | 42 | I don't think it's a bug, I think it's because of cookies. Flavourly doesn't seem to like the cookies we're sending back. Try checking the difference between the value of `s.cookies` after the first request, once in something where you don't see the 'bug', and once in a version where you do.
| Lukasa | 2015-01-15T20:46:42Z | https://github.com/psf/requests/issues/2409#issuecomment-70158467 | 0 |
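To check the cookie theory above, dump the jar between the two calls and compare what gets sent back on the second request:
```python
import requests

url = 'https://www.flavourly.com/start-subscription/12/'
s = requests.Session()

r1 = s.get(url, verify=False)
print(r1.status_code, dict(s.cookies))  # cookies the first response set

r2 = s.get(url, verify=False)           # the same jar is sent back here
print(r2.status_code)
```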
psf/requests | http | 2,467 | timeout issues for chunked responses | When I set a timeout value for my simple GET request, the `requests.get` call takes a really long time to complete when I don't iterate over the returned content. If I iterate over the content, the `requests.get` call comes back very quickly. I've delved pretty deep into the requests source code and can't figure out what the issue is.
Here is a test case to illustrate my issue: https://gist.github.com/zsalzbank/90ebcaaf94f6c34e9559
What I would expect to happen is that the non-streamed and non-iterated responses would return just as quickly as all the other tests, but instead, they take 25x longer. When the timeout is removed, the responses come back quickly as well.
From my understanding of the timeout parameter, it is only timing out for the initial data response from the server. I know that some data comes back quickly because I modified the requests library and printed out the data as soon as it comes back (added a `print` after this line: https://github.com/kennethreitz/requests/blob/master/requests/models.py#L655).
What is going on here? Am I doing something wrong on the server side? I know that no `Content-Length` comes back from the server, but `Connection: close` does come back. I'm not sure if this is related to #1041 (it sounds similar).
| closed | 2015-03-02T23:11:27Z | 2021-09-08T23:05:58Z | https://github.com/psf/requests/issues/2467 | Can you post your values? Because I don't see this at all:
T=10, stream iter took: 0.515799999237
T=10, stream all took: 0.428910970688
T=10, no stream took: 0.423730134964
T=None, stream iter took: 0.422012805939
T=None, stream all took: 0.445449113846
T=None, no stream took: 0.433798074722
| zsalzbank | 41 | Can you post your values? Because I don't see this at all:
T=10, stream iter took: 0.515799999237
T=10, stream all took: 0.428910970688
T=10, no stream took: 0.423730134964
T=None, stream iter took: 0.422012805939
T=None, stream all took: 0.445449113846
T=None, no stream took: 0.433798074722
| Lukasa | 2015-03-03T07:53:30Z | https://github.com/psf/requests/issues/2467#issuecomment-76901637 | 0 |
psf/requests | http | 2,371 | requests has poor performance streaming large binary responses | https://github.com/alex/http-client-bench contains the benchmarks I used.
The results are something like:
| | requests/http | socket |
| --- | --- | --- |
| CPython | 12MB/s | 200MB/s |
| PyPy | 80MB/s | 300MB/s |
| Go | 150MB/s | n/a |
requests imposes a considerable overhead compared to a socket, particularly on CPython.
| closed | 2014-12-05T00:37:40Z | 2021-09-08T04:00:55Z | https://github.com/psf/requests/issues/2371 | That overhead is unexpectedly large. However, avoiding it might be tricky.
The big problem is that we do quite a lot of processing per chunk. That's all the way down the stack: requests, urllib3 and httplib. It would be extremely interesting to see where the time is being spent to work out who is causing the inefficiency.
| alex | 40 | That overhead is unexpectedly large. However, avoiding it might be tricky.
The big problem is that we do quite a lot of processing per chunk. That's all the way down the stack: requests, urllib3 and httplib. It would be extremely interesting to see where the time is being spent to work out who is causing the inefficiency.
| Lukasa | 2014-12-05T01:01:23Z | https://github.com/psf/requests/issues/2371#issuecomment-65732050 | 0 |
psf/requests | http | 1,622 | Requests 2.0.0 breaks SSL proxying via https_port of Squid. | For a Squid configuration of:
```
http_port 3128
https_port 3129 cert=/usr/local/opt/squid/etc/ssl/squid.crt key=/usr/local/opt/squid/etc/ssl/squid.key
```
And test names where:
The 'http_port' annotation indicates whether Squid HTTP port was used.
The 'https_port' annotation indicates whether Squid HTTPS port was used.
The 'scheme' annotation in test names refers to whether the scheme was part of proxy definition in proxies dictionary passed to requests
```
no_scheme --> { 'https': 'localhost:3129' }
http_scheme --> { 'https': 'http://localhost:3129' }
https_scheme --> { 'https': 'https://localhost:3129' }
```
Results for different requests versions are:
```
requests 0.9.3 1.2.3 2.0.0
no_proxy PASS PASS PASS
http_port_no_scheme PASS FAIL FAIL
http_port_with_http_scheme PASS PASS PASS
http_port_with_https_scheme FAIL FAIL PASS
https_port_no_scheme FAIL PASS FAIL
https_port_with_http_scheme FAIL FAIL FAIL
https_port_with_https_scheme PASS PASS FAIL
```
The one I am most concerned about is https_port_with_https_scheme, as this no longer works.
I fully realise that http_port_with_https_scheme now works and that this presumably uses CONNECT, which is the safest option, but we would have to notify all customers relying on https_port_with_https_scheme that what they used before no longer works and that they will need to change the configuration they have. If they don't heed that instruction, then we will start to see failed connections and complaints.
BTW, it would be really good if the documentation explained the differences between all the combinations, what mechanisms they use, and what represents best practice for being the safest to use.
Test used for https_port_with_https_scheme is:
```
import unittest
import requests
PROXY_HOST = 'localhost'
PROXY_HTTP_PORT = 3128
PROXY_HTTPS_PORT = 3129
REMOTE_URL = 'https://pypi.python.org/pypi'
class TestProxyingOfSSLRequests(unittest.TestCase):
    def test_proxy_via_squid_https_port_with_https_scheme(self):
        proxies = { 'https': 'https://%s:%s' % (PROXY_HOST, PROXY_HTTPS_PORT) }
        response = requests.get(REMOTE_URL, proxies=proxies)
        self.assertTrue(len(response.content) != 0)

if __name__ == '__main__':
    unittest.main()
```
For full set of tests and results see:
- https://dl.dropboxusercontent.com/u/22571016/requests-ssl-proxy.tar
Our own cheat sheet for setting up Squid on Mac OS X with SSL support:
To test proxying via SSL, the easiest thing to do is install 'squid' via 'brew' under MacOSX, but avoid the standard recipe and instead use:
```
brew install https://raw.github.com/mxcl/homebrew/a7bf4c381f4e38c24fb23493a92851ea8339493e/Library/Formula/squid.rb
```
This will install 'squid' with SSL support.
You can then generate a self signed certificate to use:
```
cd /usr/local/opt/squid/etc
mkdir ssl
cd ssl
openssl genrsa -des3 -out squid.key 1024
openssl req -new -key squid.key -out squid.csr
cp squid.key squid.key.org
openssl rsa -in squid.key.org -out squid.key
openssl x509 -req -days 365 -in squid.csr -signkey squid.key -out squid.crt
```
Then edit the Squid configuration file at '/usr/local/opt/squid/etc/squid.conf', adding:
```
https_port 3129 cert=/usr/local/opt/squid/etc/ssl/squid.crt key=/usr/local/opt/squid/etc/ssl/squid.key
```
Then use configuration of:
```
proxy_host = localhost
proxy_port = 3129
```
| closed | 2013-09-25T03:16:36Z | 2021-09-08T23:06:05Z | https://github.com/psf/requests/issues/1622 | Unless someone else beats me to it, I'll probably take a look at duplicating this weekend. I have pretty much no context on the proxy work though so if anyone with some background on that wants to pair on this let me know.
| GrahamDumpleton | 40 | Unless someone else beats me to it, I'll probably take a look at duplicating this weekend. I have pretty much no context on the proxy work though so if anyone with some background on that wants to pair on this let me know.
| sigmavirus24 | 2013-09-25T03:25:34Z | https://github.com/psf/requests/issues/1622#issuecomment-25059873 | 0 |
psf/requests | http | 968 | Switch to Apache 2 | I've been wanting to do this for a long time.
I don't care about incompatibility with GPLv2.
| closed | 2012-11-27T18:39:54Z | 2021-09-09T05:30:45Z | https://github.com/psf/requests/issues/968 | Evaluate licenses of vendored libraries (NOTICES) for compatibility.
| kennethreitz | 40 | Evaluate licenses of vendored libraries (NOTICES) for compatibility.
| kennethreitz | 2012-11-27T18:40:33Z | https://github.com/psf/requests/issues/968#issuecomment-10770642 | 0 |
psf/requests | http | 6,432 | The latest version of requests (2.29.0) does not support urllib3 2.0.0 | ## The latest version of ``requests`` (``2.29.0``) does not support ``urllib3`` ``2.0.0``
``urllib3`` ``2.0.0`` was just released: https://github.com/urllib3/urllib3/releases/tag/2.0.0
But currently ``requests`` ``2.29.0`` has a range bound on it: ``<1.27 and >=1.21.1`` for ``urllib3``.
If you try to install a package that has ``urllib3==2.0.0`` as a dependency (while using the latest version of ``requests``), there will be errors:
```
<PACKAGE> depends on urllib3==2.0.0
requests 2.29.0 depends on urllib3<1.27 and >=1.21.1
```
Expecting ``requests`` to support the latest version of ``urllib3``.
(For Python 3.7 or newer)
| closed | 2023-04-26T17:53:09Z | 2023-08-12T19:30:38Z | https://github.com/psf/requests/issues/6432 | Hi @mdmintz, this is intentional as discussed [here](https://github.com/psf/requests/pull/6430#issuecomment-1522542220). We'll move the pin once we get more data points on any issues in the major version bump. We have a responsibility to keep the majority of the Python ecosystem stable during the transition. | mdmintz | 39 | Hi @mdmintz, this is intentional as discussed [here](https://github.com/psf/requests/pull/6430#issuecomment-1522542220). We'll move the pin once we get more data points on any issues in the major version bump. We have a responsibility to keep the majority of the Python ecosystem stable during the transition. | nateprewitt | 2023-04-26T17:55:32Z | https://github.com/psf/requests/issues/6432#issuecomment-1523829661 | 18 |
psf/requests | http | 3,052 | Why default to simplejson? | I've recently run into an issue where another library installed simplejson, and because the requests library defaults to it if available, it caused all of our JSON requests to fail due to a decoding problem.
I'm unclear why requests even defaults to simplejson anymore. I'd be happy to contribute a PR to make the JSON library for requests more controllable, but wanted to submit an issue first. Or perhaps there is another way I'm unaware of that would allow more control over which JSON library requests will use.
| closed | 2016-03-15T20:55:07Z | 2021-09-03T00:10:50Z | https://github.com/psf/requests/issues/3052 | Thanks for this report! Please see issue #2516 for the last time this was discussed.
| digitaldavenyc | 39 | Thanks for this report! Please see issue #2516 for the last time this was discussed.
| Lukasa | 2016-03-15T20:59:14Z | https://github.com/psf/requests/issues/3052#issuecomment-197016920 | 0 |
psf/requests | http | 2,982 | Error 404 for url, that contains relative path parts | Browsers & other tools seem to ignore the trailing dot in the URL. Unfortunately `requests` does not.
Compare
```
$ curl -s https://github.com/. -o /dev/null -w "%{http_code}"; echo
200
```
With
```
$ python -c 'import requests; print(requests.get("https://github.com/.").status_code)'
404
```
| closed | 2016-01-27T20:55:05Z | 2024-05-20T14:34:59Z | https://github.com/psf/requests/issues/2982 | requests/api.py - request()
```
with sessions.Session() as session:
    return session.request(method=method, url=url.rstrip("."), **kwargs)
```
Not sure if this matches any style guides, but should help solve this issue
| Lol4t0 | 39 | requests/api.py - request()
```
with sessions.Session() as session:
    return session.request(method=method, url=url.rstrip("."), **kwargs)
```
Not sure if this matches any style guides, but should help solve this issue
| StewPoll | 2016-01-27T21:59:11Z | https://github.com/psf/requests/issues/2982#issuecomment-175881337 | 0 |
psf/requests | http | 2,911 | Can't use session proxy in its request for HTTPS protocol | I've been struggling with my company proxy to make an https request.
import requests
from requests.auth import HTTPProxyAuth
proxy_string = 'http://user:password@url_proxy:port_proxy'
s = requests.Session()
s.proxies = {"http": proxy_string , "https": proxy_string}
s.auth = HTTPProxyAuth(user,password)
r = s.get('http://www.google.com') # OK
print(r.text)
r = s.get('https://www.google.com',proxies={"http": proxy_string , "https": proxy_string}) #OK
print(r.text)
r = s.get('https://www.google.com') # KO
print(r.text)
When KO, I have the following exception :
HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required',)))
I looked online but didn't find someone having this specific issue with HTTPS.
Thank you for your time
Description of issue here :
http://stackoverflow.com/questions/34025964/python-requests-api-using-proxy-for-https-request-get-407-proxy-authentication-r
| closed | 2015-12-01T19:55:01Z | 2021-09-08T20:01:01Z | https://github.com/psf/requests/issues/2911 | Have you tried the same request without setting `s.auth`?
| FabriceSh44 | 39 | Have you tried the same request without setting `s.auth`?
| Lukasa | 2015-12-01T22:34:16Z | https://github.com/psf/requests/issues/2911#issuecomment-161117977 | 0 |
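A likely explanation for the KO case: `s.auth = HTTPProxyAuth(...)` only attaches a Proxy-Authorization header to ordinary requests, and for https:// URLs the proxy is spoken to via a CONNECT tunnel that never sees those headers; credentials embedded in the proxy URL do reach the CONNECT request. A sketch with placeholder credentials and host:
```python
import requests

proxy = "http://user:password@proxy.example.com:8080"

s = requests.Session()
s.proxies = {"http": proxy, "https": proxy}
# No s.auth here: for HTTPS the user:password in the proxy URL is what
# gets sent with the CONNECT request that opens the tunnel.

print(s.get("https://www.google.com").status_code)
```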
psf/requests | http | 2,543 | OpenSSL.SSL.SysCallError: (104, 'Connection reset by peer') | I'm using the python package `requests` to send requests to https://mobile.twitter.com/username/following.
At first, I encountered the exception: requests.exceptions.SSLError: [Errno 8] _ssl.c:504: EOF occurred in violation of protocol. To solve that, I followed this solution to write an SSLAdapter to specify PROTOCOL_TLSv1.
After that, I encountered another exception: requests.exceptions.SSLError: [Errno bad handshake] (-1, 'Unexpected EOF'). And I found this, but I send the request and receive the data in the same process.
And then, I used `requests` to send requests to https://api.twitter.com/1.1/friends/ids.json. The second exception is gone (still don't understand why). But I encountered a third exception: OpenSSL.SSL.SysCallError: (104, 'Connection reset by peer'). I found [this](http://stackoverflow.com/questions/383738/104-connection-reset-by-peer-socket-error-or-when-does-closing-a-socket-resu) on SO. And I added `time.sleep(10)` before sending requests. But the third exception still happens.
So the second and the third exceptions still happen. Maybe the response content is too big to read? Or it's a problem with the Twitter server (some SO [solutions](http://stackoverflow.com/questions/23397460/error-handling-boto-error-104-connection-reset-by-peer) said as much).
| closed | 2015-04-10T01:31:08Z | 2021-09-08T23:00:52Z | https://github.com/psf/requests/issues/2543 | All of those exceptions indicate that Twitter is closing the connection on you. You should check whether your data is valid.
| stamaimer | 39 | All of those exceptions indicate that Twitter is closing the connection on you. You should check whether your data is valid.
| Lukasa | 2015-04-10T02:22:23Z | https://github.com/psf/requests/issues/2543#issuecomment-91404503 | 0 |
psf/requests | http | 1,910 | 100% processor usage during GET have to wait 60s for response | When a GET request has to wait 60s for the remote service's response, processor usage increases to 100% - version 1 of "requests" handled this case better.
The GET is configured with "cert" data and "timeout=120" over an SSL connection.
| closed | 2014-02-11T11:11:44Z | 2021-09-08T23:08:01Z | https://github.com/psf/requests/issues/1910 | Do you have a publicly-accessible URL that I can test against?
| e-manuel | 39 | Do you have a publicly-accessible URL that I can test against?
| Lukasa | 2014-02-11T11:13:22Z | https://github.com/psf/requests/issues/1910#issuecomment-34744987 | 0 |
psf/requests | http | 1,198 | max-retries-exceeded exceptions are confusing | hi,
for example:
```
>>> requests.get('http://localhost:1111')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "requests/sessions.py", line 312, in request
resp = self.send(prep, **send_kwargs)
File "requests/sessions.py", line 413, in send
r = adapter.send(request, **kwargs)
File "requests/adapters.py", line 223, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=1111): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 61] Connection refused)
```
(assuming nothing is listening on port 1111)
The exception says "Max retries exceeded". I found this confusing because I did not specify any retry-related params; in fact, I am unable to find any documentation about specifying the retry count. After going through the code, it seems that urllib3 is the underlying transport, and it is called with max_retries=0 (so in fact there are no retries), and requests simply wraps the exception. So it is understandable, but it confuses the end-user (end-developer). I think something better should be done here, especially considering that it is very easy to get this error.
| closed | 2013-02-15T14:59:41Z | 2018-12-27T14:25:44Z | https://github.com/psf/requests/issues/1198 | Requests wraps the exception for the user's convenience. The original exception is part of the message, although the Traceback is misleading. I'll think about how to improve this.
| gabor | 39 | Requests wraps the exception for the user's convenience. The original exception is part of the message, although the Traceback is misleading. I'll think about how to improve this.
| sigmavirus24 | 2013-02-15T15:35:56Z | https://github.com/psf/requests/issues/1198#issuecomment-13611915 | 0 |
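The retry count is configurable despite the default of 0: mount an HTTPAdapter with max_retries on a session. A sketch reproducing the report's setup:
```python
import requests
from requests.adapters import HTTPAdapter

s = requests.Session()
s.mount("http://", HTTPAdapter(max_retries=3))
s.mount("https://", HTTPAdapter(max_retries=3))

try:
    s.get("http://localhost:1111")  # nothing listening, as in the report
except requests.exceptions.ConnectionError as err:
    print(err)  # the retry count in the message now reflects reality
```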
psf/requests | http | 281 | Cookies need to be CookieJar-backed again. | closed | 2011-11-19T17:59:54Z | 2021-09-09T09:00:31Z | https://github.com/psf/requests/issues/281 | k/v only doesn't account for same keys with different paths.
| kennethreitz | 39 | k/v only doesn't account for same keys with different paths.
| kennethreitz | 2011-11-19T18:46:53Z | https://github.com/psf/requests/issues/281#issuecomment-2800227 | 0 |
|
psf/requests | http | 4,076 | ValueError: not enough values to unpack (expected 3, got 1) | When using requests, I've just upgraded and I'm having the following error:
`import requests as rq`
`File "C:\Users\_HOME_\AppData\Local\Programs\Python\Python36\lib\site-packages\requests\__init__.py", line 53, in <module>`
`major, minor, patch= urllib3_version`
`ValueError: not enough values to unpack (expected 3, got 1)` | closed | 2017-05-27T14:47:31Z | 2021-09-08T09:00:36Z | https://github.com/psf/requests/issues/4076 | can you give me the output of `pip freeze`? | Javinator9889 | 38 | can you give me the output of `pip freeze`? | kennethreitz | 2017-05-27T14:54:29Z | https://github.com/psf/requests/issues/4076#issuecomment-304457116 | 0 |
psf/requests | http | 3,099 | overall timeout | We already make great use of the timeout parameter which allows setting per TCP transaction timeouts. This is very helpful! However, we also need to support an overall timeout across the connection. Reading the [docs on timeouts](http://docs.python-requests.org/en/master/user/quickstart/#timeouts) I see this isn't currently supported, and searching through the issues at least a bit back I didn't see another request for this feature -- excuse me if there is.
I realize we can set timers in our library to accomplish this, but I'm concerned about the additional overhead (one per thread, and we may have many) as well as any adverse effects to connection pooling if we end up needing to abort a request. Is there a good way to abort a request in the first place? I didn't see anything obvious in the docs.
So: Long term, it would be great if we could add overall timeout to the requests library. Short term, is there a recommended way of implementing this on my end?
| closed | 2016-04-15T22:00:39Z | 2021-09-08T18:00:47Z | https://github.com/psf/requests/issues/3099 | @jribbens There are a few problems with this.
Part 1 is that the complexity of such a patch is very high. To get it to behave correctly you need to repeatedly change timeouts at the socket level. This means that the patch needs to be passed pervasively though httplib, which we've already patched more than we'd like to. Essentially, we'd need to be reaching into httplib and reimplementing about 50% of its more complex methods in order to achieve this functional change.
Part 2 is that the maintenance of such a patch is relatively burdensome. We'd likely need to start maintaining what amounts to a parallel fork of httplib (more properly http.client at this time) in order to successfully do it. Alternatively, we'd need to take on the maintenance burden of a different HTTP stack that is more amenable to this kind of change. This part is, I suspect, commonly missed by those who wish to have such a feature: the cost of implementing it is high, but that is _nothing_ compared to the ongoing maintenance costs of supporting such a feature on all platforms.
Part 3 is that the advantage of such a patch is unclear. It has been my experience that most people who want a total timeout patch are not thinking entirely clearly about what they want. In most cases, total timeout parameters end up having the effect of killing perfectly good requests for no reason.
For example, suppose you've designed a bit of code that downloads files, and you'd like to handle hangs. While it's initially tempting to want to set a flat total timeout ("no request may take more than 30 seconds!"), such a timeout misses the point. For example, if a file changes from being 30MB to being 30GB in size, such a file can _never_ download in that kind of time interval, even though the download may be entirely healthy.
Put another way, total timeouts are an attractive nuisance: they appear to solve a problem, but they don't do it effectively. A more useful approach, in my opinion, is to take advantage of the per-socket-action timeout, combined with `stream=True` and `iter_content`, and assign yourself timeouts for chunks of data. The way `iter_content` works, flow of control will be returned to your code in a somewhat regular interval. That means that you can set yourself socket-level timeouts (e.g. 5s) and then `iter_content` over fairly small chunks (e.g. 1KB of data) and be relatively confident that unless you're being actively attacked, no denial of service is possible here. If you're really worried about denial of service, set your socket-level timeout much lower and your chunk size smaller (0.5s and 512 bytes) to ensure that you're regularly having control flow handed back to you.
The upshot of all this is that I believe that total timeouts are a misfeature in a library like this one. The best kind of timeout is one that is tuned to allow large responses enough time to download in peace, and such a timeout is best served by socket-level timeouts and `iter_content`.
| emgerner-msft | 38 | @jribbens There are a few problems with this.
Part 1 is that the complexity of such a patch is very high. To get it to behave correctly you need to repeatedly change timeouts at the socket level. This means that the patch needs to be passed pervasively though httplib, which we've already patched more than we'd like to. Essentially, we'd need to be reaching into httplib and reimplementing about 50% of its more complex methods in order to achieve this functional change.
Part 2 is that the maintenance of such a patch is relatively burdensome. We'd likely need to start maintaining what amounts to a parallel fork of httplib (more properly http.client at this time) in order to successfully do it. Alternatively, we'd need to take on the maintenance burden of a different HTTP stack that is more amenable to this kind of change. This part is, I suspect, commonly missed by those who wish to have such a feature: the cost of implementing it is high, but that is _nothing_ compared to the ongoing maintenance costs of supporting such a feature on all platforms.
Part 3 is that the advantage of such a patch is unclear. It has been my experience that most people who want a total timeout patch are not thinking entirely clearly about what they want. In most cases, total timeout parameters end up having the effect of killing perfectly good requests for no reason.
For example, suppose you've designed a bit of code that downloads files, and you'd like to handle hangs. While it's initially tempting to want to set a flat total timeout ("no request may take more than 30 seconds!"), such a timeout misses the point. For example, if a file changes from being 30MB to being 30GB in size, such a file can _never_ download in that kind of time interval, even though the download may be entirely healthy.
Put another way, total timeouts are an attractive nuisance: they appear to solve a problem, but they don't do it effectively. A more useful approach, in my opinion, is to take advantage of the per-socket-action timeout, combined with `stream=True` and `iter_content`, and assign yourself timeouts for chunks of data. The way `iter_content` works, flow of control will be returned to your code in a somewhat regular interval. That means that you can set yourself socket-level timeouts (e.g. 5s) and then `iter_content` over fairly small chunks (e.g. 1KB of data) and be relatively confident that unless you're being actively attacked, no denial of service is possible here. If you're really worried about denial of service, set your socket-level timeout much lower and your chunk size smaller (0.5s and 512 bytes) to ensure that you're regularly having control flow handed back to you.
The upshot of all this is that I believe that total timeouts are a misfeature in a library like this one. The best kind of timeout is one that is tuned to allow large responses enough time to download in peace, and such a timeout is best served by socket-level timeouts and `iter_content`.
| Lukasa | 2016-04-28T17:11:52Z | https://github.com/psf/requests/issues/3099#issuecomment-215498005 | 8 |
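The recommendation above, per-socket-action timeouts plus a wall-clock budget enforced between `iter_content` chunks, looks roughly like this sketch (all numbers are illustrative, not tuned values):
```python
import time
import requests

def get_with_deadline(url, deadline=30.0, sock_timeout=5.0, chunk_size=1024):
    """Bound the whole download by `deadline` seconds while each single
    socket read may stall for at most `sock_timeout` seconds."""
    start = time.monotonic()
    body = bytearray()
    with requests.get(url, stream=True, timeout=sock_timeout) as r:
        for chunk in r.iter_content(chunk_size=chunk_size):
            body.extend(chunk)
            # Control returns here at least once per chunk, so a healthy
            # but slow transfer is checked against the budget regularly.
            if time.monotonic() - start > deadline:
                raise TimeoutError("download exceeded %.1fs budget" % deadline)
    return bytes(body)
```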
psf/requests | http | 2,336 | Timeout for connecting to proxy | I provided a bogus proxy, set a very short timeout and expected requests to fail very fast. Unfortunately the opposite happened. Is this behavior intentional? As far as I can tell there is no way to set a timeout for the proxy connection.
``` Python
time python -c "import requests; print requests.__version__; requests.get('https://google.com', timeout=0.01, proxies={'https':'https://1.1.1.1'})"
2.4.3
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 60, in get
return request('get', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 49, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 457, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 569, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 413, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='google.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', error(110, 'Connection timed out')))
real 2m7.350s
user 0m0.090s
sys 0m0.030s
```
| closed | 2014-11-13T12:10:06Z | 2021-09-08T17:05:23Z | https://github.com/psf/requests/issues/2336 | That's an interesting question. @kevinburke, got any idea?
| glaslos | 38 | That's an interesting question. @kevinburke, got any idea?
| Lukasa | 2014-11-13T15:53:37Z | https://github.com/psf/requests/issues/2336#issuecomment-62913117 | 0 |
psf/requests | http | 713 | Support for "Expects" http header | It would be nice to have support for the Expect -> 100 Continue flow. This is especially important when we are doing uploads, since we will not need to transfer all the data before we encounter a 401, for example.
A typical use-case would be a streaming upload, with the data coming from a client and proxied on-the-fly to our destination server. Requests would read the data from the input source socket only if the destination server had already sent the 100-Continue header. When we are dealing with S3, 401 errors are common, and if the data is not read we can retry the request.
Requests can check the request headers, and if it finds an "Expect" header it will wait for the 100 Continue response from the server. If it does not come, this error should flow to the caller in a way that it can distinguish a network problem from a failed expectation error.
| closed | 2012-07-10T21:25:00Z | 2021-09-08T14:00:48Z | https://github.com/psf/requests/issues/713 | This would be wonderful.
I believe @durin42 has some thoughts about this as well.
| edevil | 38 | This would be wonderful.
I believe @durin42 has some thoughts about this as well.
| kennethreitz | 2012-07-10T21:44:55Z | https://github.com/psf/requests/issues/713#issuecomment-6890964 | 0 |
psf/requests | http | 3,748 | requests get(URL) never returns, even with Proxy! | 99% of the time I use .get() I get responses in ms... but there are certain URLs that I work on, both http and https, that are available globally too (but can't be shared for some reason). What I found is that some of these .get() calls never actually return a value; instead the .get() call never ends, it keeps running infinitely. What could be the reason? Is there an alternative, such as using a proxy or anything as such? Please suggest. | closed | 2016-12-03T16:59:31Z | 2016-12-05T14:34:26Z | https://github.com/psf/requests/issues/3748 | @arunchandramouli have you considered setting a `timeout`?
Beyond that, I'd advise you to not ask questions on a defect (issue) tracker. Instead, ask questions on [StackOverflow](stackoverflow.com). | arunchandramouli | 37 | @arunchandramouli have you considered setting a `timeout`?
Beyond that, I'd advise you to not ask questions on a defect (issue) tracker. Instead, ask questions on [StackOverflow](https://stackoverflow.com). | arunchandramouli | 37 | @arunchandramouli have you considered setting a `timeout`?
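(Editor's sketch of what that looks like; the connect/read tuple form needs requests 2.4.0 or newer:)

```python
import requests

url = 'https://example.com/slow'  # hypothetical endpoint
# bound the connect and read phases separately so a silent server
# cannot hang the call forever
r = requests.get(url, timeout=(3.05, 27))  # (connect, read) in seconds
```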
psf/requests | http | 3,701 | AttributeError: 'X509' object has no attribute '_x509' | While trying to use Slack API using slacker module that is based on Requests, I receive this SSL-related error:
```
Python 2.7.12 (default, Oct 11 2016, 05:24:00)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from slacker import Slacker
>>> s=Slacker('<SLACK BOT API KEY>')
>>> s.chat.post_message('#general', 'test')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/slacker/__init__.py", line 287, in post_message
'icon_emoji': icon_emoji
File "/usr/local/lib/python2.7/site-packages/slacker/__init__.py", line 71, in post
return self._request(requests.post, api, **kwargs)
File "/usr/local/lib/python2.7/site-packages/slacker/__init__.py", line 57, in _request
**kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/api.py", line 110, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 423, in send
timeout=timeout
File "/usr/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 594, in urlopen
chunked=chunked)
File "/usr/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 350, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 835, in _validate_conn
conn.connect()
File "/usr/local/lib/python2.7/site-packages/requests/packages/urllib3/connection.py", line 330, in connect
cert = self.sock.getpeercert()
File "/usr/local/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 324, in getpeercert
'subjectAltName': get_subj_alt_name(x509)
File "/usr/local/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 166, in get_subj_alt_name
cert = _Certificate(openssl_backend, peer_cert._x509)
AttributeError: 'X509' object has no attribute '_x509'
```
I see that other users experience this too: http://stackoverflow.com/questions/40628315/python-requests-and-streaming-attributeerror-x509-object-has-no-attribute
Downgrading from 2.12.1 to 2.11.1 removes the issue. | closed | 2016-11-17T12:54:24Z | 2021-09-03T00:10:46Z | https://github.com/psf/requests/issues/3701 | What version of PyOpenSSL do you have installed?
| sapran | 37 | What version of PyOpenSSL do you have installed?
| Lukasa | 2016-11-17T12:55:20Z | https://github.com/psf/requests/issues/3701#issuecomment-261239706 | 0 |
psf/requests | http | 2,949 | Session's Authorization header isn't sent on redirect | I'm using requests to hit developer-api.nest.com and setting an Authorization header with a bearer token. On some requests, that API responds with an 307 redirect. When that happens, I still need the Authorization header to be sent on the subsequent request. I've tried using `requests.get()` as well as a session.
I suppose I could work around this by not allowing redirects, detecting the 307 and then issuing the new request myself but I'm wondering if this is a bug. Should I expect that the Authorization header would be sent on all requests made within the context of a session?
``` python
In [41]: s = requests.Session()
In [42]: s.headers
Out[42]: {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'User-Agent': 'python-requests/2.7.0 CPython/3.4.3 Darwin/15.2.0'}
In [43]: s.headers['Authorization'] = "Bearer <snip>"
In [45]: s.get("https://developer-api.nest.com/devices/thermostats/")
Out[45]: <Response [401]>
In [46]: s.get("https://developer-api.nest.com/devices/thermostats/")
Out[46]: <Response [200]>
In [49]: Out[45].history
Out[49]: [<Response [307]>]
In [50]: Out[46].history
Out[50]: []
In [51]: Out[45].request.headers
Out[51]: {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'User-Agent': 'python-requests/2.7.0 CPython/3.4.3 Darwin/15.2.0'}
In [52]: Out[46].request.headers
Out[52]: {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'User-Agent': 'python-requests/2.7.0 CPython/3.4.3 Darwin/15.2.0', 'Authorization': 'Bearer <snip>'}
```
| closed | 2015-12-28T21:44:14Z | 2021-12-07T23:00:27Z | https://github.com/psf/requests/issues/2949 | There's two Nest-specific workarounds.
One is to pass the `auth` parameter with the access_token rather than using the Authorization header. I found this on https://gist.github.com/tylerdave/409ffa08e1d47b1a1e23
Another is to save a dictionary with the headers you'd use, don't follow redirects, and then make a second request passing in the headers again:
``` python
headers = {'Authorization': 'Bearer ' + access_token, 'Content-Type': 'application/json'}
initial_response = requests.get('https://developer-api.nest.com', headers=headers, allow_redirects=False)
if initial_response.status_code == 307:
api_response = requests.get(initial_response.headers['Location'], headers=headers, allow_redirects=False)
```
| jwineinger | 37 | There's two Nest-specific workarounds.
One is to pass the `auth` parameter with the access_token rather than using the Authorization header. I found this on https://gist.github.com/tylerdave/409ffa08e1d47b1a1e23
Another is to save a dictionary with the headers you'd use, don't follow redirects, and then make a second request passing in the headers again:
``` python
headers = {'Authorization': 'Bearer ' + access_token, 'Content-Type': 'application/json'}
initial_response = requests.get('https://developer-api.nest.com', headers=headers, allow_redirects=False)
if initial_response.status_code == 307:
api_response = requests.get(initial_response.headers['Location'], headers=headers, allow_redirects=False)
```
| technicalpickles | 2016-11-05T22:01:57Z | https://github.com/psf/requests/issues/2949#issuecomment-258644622 | 8 |
psf/requests | http | 2,039 | incomplete file upload | On Windows 7 64bit, uploading a file using requests via python-onedrive does not complete, to be more precise, only the first few bytes/kilobytes are uploaded.
Modified models.py for debugging as follows:
```
def prepare(self, method=None, url=None, headers=None, files=None,
data=None, params=None, auth=None, cookies=None, hooks=None):
"""Prepares the entire request with the given parameters."""
### ADDED FOR DEBUGGING
with open(r'C:\Python\requests_debug.log', 'ab') as log:
for k, file_tuple in files.viewitems():
log.write('BEFORE: Key: {}, File tuple: {}, position: {}\n'.format(k, file_tuple, file_tuple[1].tell()))
### END
self.prepare_method(method)
self.prepare_url(url, params)
self.prepare_headers(headers)
self.prepare_cookies(cookies)
self.prepare_body(data, files)
self.prepare_auth(auth, url)
# Note that prepare_auth must be last to enable authentication schemes
# such as OAuth to work on a fully prepared request.
# This MUST go after prepare_auth. Authenticators could add a hook
self.prepare_hooks(hooks)
### ADDED FOR DEBUGGING
with open(r'C:\Python\requests_debug.log', 'ab') as log:
for k, file_tuple in files.viewitems():
pos = file_tuple[1].tell()
file_tuple[1].seek(0, os.SEEK_END)
log.write('AFTER: Key: {}, position: {}, size: {}\n'.format(k, pos, file_tuple[1].tell()))
log.write('Body size: {}\n'.format(len(self.body)))
### END
```
Here's an example log:
```
BEFORE: Key: file, File tuple: (u'nv2-pc.zip', <open file u'nv2-pc.zip', mode 'r' at 0x028B4EE8>, u'application/octet-stream'), position: 0
AFTER: Key: file, position: 4784128, size: 4787658
Body size: 522
```
As you can see, the body size is much smaller than the size indicated in AFTER.
| closed | 2014-05-12T19:18:35Z | 2019-03-20T20:34:05Z | https://github.com/psf/requests/issues/2039 | Out of interest, once you've sent the request what's the value of `len(r.request.body)`?
| clem2 | 37 | Out of interest, once you've sent the request what's the value of `len(r.request.body)`?
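(Editorial hypothesis, not stated in the thread: the debug log above shows the file opened with mode 'r'. On Windows, text mode treats a 0x1A byte as end-of-file and translates line endings, which would truncate a zip to a few hundred bytes exactly like this; binary mode avoids it.)

```python
import requests

url = 'https://example.com/upload'  # hypothetical endpoint
with open('nv2-pc.zip', 'rb') as f:  # 'rb', not 'r', on Windows
    files = {'file': ('nv2-pc.zip', f, 'application/octet-stream')}
    r = requests.post(url, files=files)
```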
| Lukasa | 2014-05-12T19:37:52Z | https://github.com/psf/requests/issues/2039#issuecomment-42877859 | 0 |
psf/requests | http | 1,457 | Returns empty with Google over HTTPS | ``` python
import requests
In [2]: r = requests.get('https://google.com')
In [3]: r.text
Out[3]: u''
In [4]: r = requests.get('http://google.com')
In [5]: r.text
Out[5]: u'<!doctype html><html itemscope="itemscope"...
```
| closed | 2013-07-13T08:40:53Z | 2021-09-09T02:11:46Z | https://github.com/psf/requests/issues/1457 | Facing a similar issue with Twitter too
``` python
In [13]: r = requests.get('https://twitter.com')
In [14]: r.text
Out[14]: u''
In [15]: r = requests.get('http://twitter.com')
In [16]: r.text
Out[16]: u''
In [17]: r.history
Out[17]: (<Response [301]>,)
In [18]: r.history[0].headers['location']
Out[18]: 'https://twitter.com/'
```
Because Twitter always redirects to https:// I am unable to retrieve any response from non-htttps twitter either.
| liquidscorpio | 37 | Facing similiar issue with Twitter too
``` python
In [13]: r = requests.get('https://twitter.com')
In [14]: r.text
Out[14]: u''
In [15]: r = requests.get('http://twitter.com')
In [16]: r.text
Out[16]: u''
In [17]: r.history
Out[17]: (<Response [301]>,)
In [18]: r.history[0].headers['location']
Out[18]: 'https://twitter.com/'
```
Because Twitter always redirects to https:// I am unable to retrieve any response from non-https twitter either.
| liquidscorpio | 2013-07-13T08:43:37Z | https://github.com/psf/requests/issues/1457#issuecomment-20916794 | 0 |
psf/requests | http | 4,043 | python requests runs infinitely if the called script executes more than 2 minutes | I am setting up a VPS. Everything has worked well with the python requests module until I had to run a script on the server that exceeded two minutes. As soon as any script executed through python requests on the remote server runs for more than 2 minutes, the request goes on infinitely. It never returns or terminates. If I call the script from any browser, it executes normally and returns the appropriate response IRRESPECTIVE OF THE DURATION. The only means of terminating any script lasting more than two minutes and executed through python requests is by including the timeout parameter in the requests call, which normally gives the following traceback:
```
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 376, in _make_request
httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 378, in _make_request
httplib_response = conn.getresponse()
File "C:\Python34\lib\http\client.py", line 1148, in getresponse
response.begin()
File "C:\Python34\lib\http\client.py", line 352, in begin
version, status, reason = self._read_status()
File "C:\Python34\lib\http\client.py", line 314, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Python34\lib\socket.py", line 371, in readinto
return self._sock.recv_into(b)
File "C:\Python34\lib\ssl.py", line 708, in recv_into
return self.read(nbytes, buffer)
File "C:\Python34\lib\ssl.py", line 580, in read
v = self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\requests\adapters.py", line 376, in send
timeout=timeout
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 609, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Python34\lib\site-packages\requests\packages\urllib3\util\retry.py", line 247, in increment
raise six.reraise(type(error), error, _stacktrace)
File "C:\Python34\lib\site-packages\requests\packages\urllib3\packages\six.py", line 310, in reraise
raise value
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 559, in urlopen
body=body, headers=headers)
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 380, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 308, in _raise_timeout
raise ReadTimeoutError(self, url, "Read timed out. (read timeout=%s)" % timeout_value)
requests.packages.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='domain1.com', port=443): Read timed out. (read timeout=130)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "all-test.py", line 32, in <module>
cool = requests.post("https://domain1.com/test3.php", stream=True, verify=False, timeout=130)
File "C:\Python34\lib\site-packages\requests\api.py", line 107, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Python34\lib\site-packages\requests\api.py", line 53, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Python34\lib\site-packages\requests\sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python34\lib\site-packages\requests\sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "C:\Python34\lib\site-packages\requests\adapters.py", line 449, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='domain1.com', port=443): Read timed out. (read timeout=130)
```
Note: everything works perfectly, with no issue, if the script is called from a browser.
Also, checking the access.log file on the remote server shows that the script completed at the right time, with no issue, and responded with a 200 status code even when the python requests call ran infinitely.
To eliminate some probable causes, I have set up and run a similar script on my local WAMP stack, calling the same script via python's requests with no issue.
Please, I would highly appreciate any help anybody can give me to resolve the above issue. It has taken me 4 days with no success.
| closed | 2017-05-21T09:59:50Z | 2021-09-08T10:00:42Z | https://github.com/psf/requests/issues/4043 | This sounds very much like it relates to the web server you're running. Can you try with a different web server? | jj-one | 36 | This sounds very much like it relates to the web server you're running. Can you try with a different web server? | Lukasa | 2017-05-21T15:43:00Z | https://github.com/psf/requests/issues/4043#issuecomment-302944592 | 0 |
psf/requests | http | 1,915 | TypeError: getresponse() got an unexpected keyword argument 'buffering' | Requests 2.2.1. Same thing happens in 1.2.3 (I upgraded from that).
I get this traceback:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 313, in _make_request
httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 480, in urlopen
body=body, headers=headers)
File "/usr/local/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 315, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/lib/python3.3/http/client.py", line 1147, in getresponse
response.begin()
File "/usr/local/lib/python3.3/http/client.py", line 358, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.3/http/client.py", line 328, in _read_status
raise BadStatusLine(line)
http.client.BadStatusLine: ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.3/site-packages/requests/adapters.py", line 330, in send
timeout=timeout
File "/usr/local/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 530, in urlopen
raise MaxRetryError(self, url, e)
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='heimdallr.jcea.es', port=443): Max retries exceeded with url: /PANICO (Caused by <class 'http.client.BadStatusLine'>: '')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./heimdallr.py", line 203, in <module>
module.start()
File "__main__", line 59, in start
File "main", line 23, in start
File "panic_report", line 17, in envia_tb_pendiente
File "/usr/local/lib/python3.3/site-packages/requests/sessions.py", line 425, in post
return self.request('POST', url, data=data, **kwargs)
File "auth_http", line 48, in request
File "/usr/local/lib/python3.3/site-packages/requests/sessions.py", line 383, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.3/site-packages/requests/sessions.py", line 486, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.3/site-packages/requests/adapters.py", line 378, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='heimdallr.jcea.es', port=443): Max retries exceeded with url: /PANICO (Caused by <class 'http.client.BadStatusLine'>: '')
Makefile:69: recipe for target 'run' failed
make: *** [run] Error 1
```
| closed | 2014-02-13T03:00:34Z | 2017-03-16T09:35:23Z | https://github.com/psf/requests/issues/1915 | For my own reference: this is 100% reproducible in changeset "2e3cbe6aed98" in my "heimdallr" Mercurial project, when running on master Raspberry PI.
| jcea | 36 | For my own reference: this is 100% reproducible in changeset "2e3cbe6aed98" in my "heimdallr" Mercurial project, when running on master Raspberry PI.
| jcea | 2014-02-13T03:16:26Z | https://github.com/psf/requests/issues/1915#issuecomment-34944673 | 0 |
psf/requests | http | 1,081 | Generate multipart posts without a file | Currently, the only way to have a multipart form request is `r = requests.post(url, data=payload, files=files)`
which may have a component
```
--3eeaadbfda0441b8be821bbed2962e4d
Content-Disposition: form-data; name="file"; filename="filename.txt"
Content-Type: text/plain
content
--3eeaadbfda0441b8be821bbed2962e4d--
```
However, I run into instances where posts are required to be in a multipart format without an associated file, like:
```
--3eeaadbfda0441b8be821bbed2962e4d
Content-Disposition: form-data; name="key1"
value1
--3eeaadbfda0441b8be821bbed2962e4d
```
but the latter is impossible to generate without the former.
Perhaps we can add a flag like `r = requests.post(url, data=payload, multipart=True)` that forces a post to be multipart, even without a file.
I am happy to work on implementing this if it sounds like a good idea.
| closed | 2013-01-03T01:53:04Z | 2021-09-07T00:06:10Z | https://github.com/psf/requests/issues/1081 | This has been discussed before. It would represent a significant change to the API which I'm not sure @kennethreitz would like.
Personally, I would be more in favor of exposing a function to generate multipart data from dictionaries (lists of tuples, etc) in the API so users can use that and just pass the generated data to requests. Ostensibly, if they're not using files, there shouldn't be a huge memory hit, but even so, they already have one huge string in memory, the second won't really kill them and it would be their fault, not ours.
Perhaps @kennethreitz would be more amenable to the second solution. I don't think it fits in with requests' design philosophy either and would be extraordinarily bizarre given the rest of the API, but _shrug_ who knows.
| ghost | 36 | This has been discussed before. It would represent a significant change to the API which I'm not sure @kennethreitz would like.
Personally, I would be more in favor of exposing a function to generate multipart data from dictionaries (lists of tuples, etc) in the API so users can use that and just pass the generated data to requests. Ostensibly, if they're not using files, there shouldn't be a huge memory hit, but even so, they already have one huge string in memory, the second won't really kill them and it would be their fault, not ours.
Perhaps @kennethreitz would be more amenable to the second solution. I don't think it fits in with requests' design philosophy either and would be extraordinarily bizarre given the rest of the API, but _shrug_ who knows.
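(An editorial sketch of that second approach using the urllib3 encoder that requests already bundles; the URL is a placeholder:)

```python
import requests
from requests.packages.urllib3.filepost import encode_multipart_formdata

# build a multipart body by hand, no file field required
body, content_type = encode_multipart_formdata([('key1', 'value1')])
r = requests.post('http://example.com/form', data=body,
                  headers={'Content-Type': content_type})
```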
| sigmavirus24 | 2013-01-03T03:37:52Z | https://github.com/psf/requests/issues/1081#issuecomment-11834185 | 0 |
psf/requests | http | 1,967 | Running a single connection with pool_block == True, hangs the application | I have a problem where I'm trying to create a python script to test a single-threaded web server on an embedded device. I was having performance problems until I realized that I could only support one connection per session. When I made my own HTTP adapter to only use one connection with pool_block equal to True, one request got sent, then the script hung trying to send the next request.
If I comment out line 533 (the "if release_conn:") in requests/packages/urllib3/connectionpool.py, my problems go away. The script performs super fast, as expected, compared against a cURL bash script.
However, I'm not sure this is the proper fix for the problem, since I'm a special case. I need a way to configure this to run this single connection from the top-level api without getting blocked.
BTW, if I turn blocking off (pool_block == False), I get the same horrible performance. Basically, it has to do retries and slows down, because the web server can only handle one connection at a time.
| closed | 2014-03-19T18:41:56Z | 2021-09-09T00:01:11Z | https://github.com/psf/requests/issues/1967 | Are you reading the full response from the request?
| DoxaLogosGit | 35 | Are you reading the full response from the request?
| Lukasa | 2014-03-19T18:51:42Z | https://github.com/psf/requests/issues/1967#issuecomment-38091631 | 0 |
psf/requests | http | 1,133 | Pluggable Request objects? | It would be super cool if it were possible to plug a custom Request object into a Session object. However, it looks as though the Request object is hardcoded into the request method, e.g. https://github.com/kennethreitz/requests/blob/1a87f15e6f8d22be6855424b46b80f66d178fe40/requests/sessions.py#L264
Any thoughts on this?
| closed | 2013-01-23T23:04:09Z | 2021-09-09T05:00:46Z | https://github.com/psf/requests/issues/1133 | Like changing the Session class's `__init__(self)` to `__init__(self, req=None)` then in the request method only create a new Request() if self.req is None?
| maxcountryman | 35 | Like changing the Session class's `__init__(self)` to `__init__(self, req=None)` then in the request method only create a new Request() if self.req is None?
| alanhamlett | 2013-01-24T08:46:49Z | https://github.com/psf/requests/issues/1133#issuecomment-12642206 | 0 |
psf/requests | http | 2,651 | Cannot make URL query string with a parameter without a value | A URL query string may contain a parameter which has no value, e.g. http://host/path/?foo or http://host/path/?a=1&foo. Currently Requests does not provide support for that.
```
In [68]: d
Out[68]: {'a': 1, 'foo': None}
In [69]: tl
Out[69]: [('a', 1), ('foo',)]
In [70]: RequestEncodingMixin._encode_params(d)
Out[70]: 'a=1'
In [71]: RequestEncodingMixin._encode_params(tl)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-71-5d4dac855108> in <module>()
----> 1 RequestEncodingMixin._encode_params(tl)
/home/f557010/jpm/local/lib/python2.7/site-packages/requests/models.pyc in _encode_params(data)
87 elif hasattr(data, '__iter__'):
88 result = []
---> 89 for k, vs in to_key_val_list(data):
90 if isinstance(vs, basestring) or not hasattr(vs, '__iter__'):
91 vs = [vs]
ValueError: need more than 1 value to unpack
```
Expected:
```
'a=1&foo'
```
| closed | 2015-06-24T23:35:06Z | 2021-02-08T02:01:02Z | https://github.com/psf/requests/issues/2651 | I can see some value in this. For API reasons it could only ever work with the 'list of tuples' approach, but I'd be ok with us adding support for this. @sigmavirus24?
| agilevic | 34 | I can see some value in this. For API reasons it could only ever work with the 'list of tuples' approach, but I'd be ok with us adding support for this. @sigmavirus24?
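(Editor's note on a workaround that already exists: requests forwards a pre-encoded string for `params` untouched, so the valueless parameter survives:)

```python
import requests

r = requests.get('http://host/path/', params='a=1&foo')  # query string passed as-is
```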
| Lukasa | 2015-06-25T06:59:47Z | https://github.com/psf/requests/issues/2651#issuecomment-115131320 | 0 |
psf/requests | http | 557 | Seeing SSLError: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib | In our logs, I noticed this error when using requests to perform a POST to Facebook's Graph API using HTTPS:
```
File "/home/api/api/lib/python2.7/site-packages/requests/api.py", line 85, in post
return request('post', url, data=data, **kwargs)
File "/home/api/api/lib/python2.7/site-packages/requests/api.py", line 40, in request
return s.request(method=method, url=url, **kwargs)
File "/home/api/api/lib/python2.7/site-packages/requests/sessions.py", line 208, in request
r.send(prefetch=prefetch)
File "/home/api/api/lib/python2.7/site-packages/requests/models.py", line 584, in send
raise SSLError(e)
SSLError: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
```
This is the first time I've seen it... could it be at all related to the cacerts file included in one of the dependencies of requests?
Is it related to this at all? https://github.com/kennethreitz/requests/issues/30
| closed | 2012-04-19T20:18:29Z | 2014-10-15T08:10:17Z | https://github.com/psf/requests/issues/557 | Do you have `certifi` installed?
| stantonk | 34 | Do you have `certifi` installed?
| kennethreitz | 2012-04-19T20:19:04Z | https://github.com/psf/requests/issues/557#issuecomment-5231020 | 0 |
psf/requests | http | 3,212 | SSL Error: bad handshake | I could not use your lib on CentOS 7 with Python 2.7.5. I got this error:
```
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 447, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)
```
Updating Python or any SSL libs didn't help. I get this error on CentOS and Ubuntu; on Arch Linux everything works well.
| closed | 2016-05-20T17:20:55Z | 2021-09-08T03:00:36Z | https://github.com/psf/requests/issues/3212 | Aha, ok, we got there.
`api.smartsheet.com` serves its TLS using what's known as a "cross-signed certificate". This was used because Verisign, the CA for `api.smartsheet.com`, originally used a 1024-bit root certificate. These were deprecated and replaced by stronger root certificates, but some older browsers and systems may not have received updates, so sites like `api.smartsheet.com` serve a root certificate that is signed by the 1024-bit root.
That's not normally a problem, _except_:
- `certifi` removed the weak 1024-bit roots
- OpenSSL older than 1.0.2 sucks at building cert chains, and so fails to correctly validate the cross-signed root.
You can solve this in two ways. The first, better but more drastic way, is to upgrade your OpenSSL to 1.0.2 or later. This is hard to do on Centos, I'm afraid. The less good but more effective way is to get the output of running `python -c "import certifi; print certifi.old_where()"` and then set the `REQUESTS_CA_BUNDLE` environment variable to the printed path.
| pensnarik | 33 | Aha, ok, we got there.
`api.smartsheet.com` serves its TLS using what's known as a "cross-signed certificate". This was used because Verisign, the CA for `api.smartsheet.com`, originally used a 1024-bit root certificate. These were deprecated and replaced by stronger root certificates, but some older browsers and systems may not have received updates, so sites like `api.smartsheet.com` serve a root certificate that is signed by the 1024-bit root.
That's not normally a problem, _except_:
- `certifi` removed the weak 1024-bit roots
- OpenSSL older than 1.0.2 sucks at building cert chains, and so fails to correctly validate the cross-signed root.
You can solve this in two ways. The first, better but more drastic way, is to upgrade your OpenSSL to 1.0.2 or later. This is hard to do on Centos, I'm afraid. The less good but more effective way is to get the output of running `python -c "import certifi; print certifi.old_where()"` and then set the `REQUESTS_CA_BUNDLE` environment variable to the printed path.
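(A sketch of that second workaround; `certifi.old_where()` only exists in certifi releases that still carry the legacy 1024-bit bundle:)

```python
import certifi
import requests

# point verification at certifi's legacy bundle for just this call...
r = requests.get('https://api.smartsheet.com', verify=certifi.old_where())

# ...or process-wide, by exporting it before requests runs:
# os.environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
```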
| Lukasa | 2016-05-23T15:53:24Z | https://github.com/psf/requests/issues/3212#issuecomment-221016311 | 17 |
psf/requests | http | 2,717 | "OverflowError: string longer than 2147483647 bytes" when trying requests.put | Hi,
I'm trying to upload a file that weighs about 3GB and I'm getting the following error:
"OverflowError: string longer than 2147483647 bytes"
If I understand correctly it seems like there's a 2GB limit? didnt manage to find any reference to such limiation or how to bypass it (if possible).
The code i'm using is:
``` python
datafile = 'someHugeFile'
with open(datafile, 'rb') as myfile:
args = myfile.read()
resp = requests.put(url, data=args, verify=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/api.py", line 99, in put
return request('put', url, data=data, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/adapters.py", line 327, in send
timeout=timeout
File "/usr/local/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/packages/urllib3/connectionpool.py", line 493, in urlopen
body=body, headers=headers)
File "/usr/local/lib/python2.7/site-packages/requests-2.3.0-py2.7.egg/requests/packages/urllib3/connectionpool.py", line 291, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python2.7/httplib.py", line 995, in request
self._send_request(method, url, body, headers)
File "/usr/local/lib/python2.7/httplib.py", line 1029, in _send_request
self.endheaders(body)
File "/usr/local/lib/python2.7/httplib.py", line 991, in endheaders
self._send_output(message_body)
File "/usr/local/lib/python2.7/httplib.py", line 844, in _send_output
self.send(msg)
File "/usr/local/lib/python2.7/httplib.py", line 820, in send
self.sock.sendall(data)
File "/usr/local/lib/python2.7/ssl.py", line 234, in sendall
v = self.send(data[count:])
File "/usr/local/lib/python2.7/ssl.py", line 203, in send
v = self._sslobj.write(data)
OverflowError: string longer than 2147483647 bytes
```
For smaller files this code works fine for me.
| closed | 2015-08-12T09:49:47Z | 2020-11-10T14:53:32Z | https://github.com/psf/requests/issues/2717 | Rather than reading the entire file and sending it across in a single request, would it be possible for you to use chunked transfer encoding? http://docs.python-requests.org/en/latest/user/advanced/#chunk-encoded-requests
| EB123 | 33 | Rather than reading the entire file and sending it across in a single request, would it be possible for you to use chunked transfer encoding? http://docs.python-requests.org/en/latest/user/advanced/#chunk-encoded-requests
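(A hedged sketch of that suggestion: passing a generator as `data` makes requests send the body with chunked transfer encoding, so the multi-gigabyte string never exists in memory:)

```python
import requests

def read_in_chunks(path, chunk_size=1024 * 1024):
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

url = 'https://example.com/upload'  # hypothetical endpoint
resp = requests.put(url, data=read_in_chunks('someHugeFile'), verify=False)
```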
| demianbrecht | 2015-08-12T14:00:40Z | https://github.com/psf/requests/issues/2717#issuecomment-130315063 | 0 |
psf/requests | http | 2,519 | Client Certificates w/Passphrases? | Hi,
client certificates can be specified with the "cert" parameter. If I pass an encrypted client certificate, the underlying OpenSSL call will query for the corresponding passphrase, but that is not really a feasible way of handling this for a larger session with multiple calls, because the password is obviously not cached in any way.
Is there any way to pass the passphrase to a connection with API calls so it is then passed on to OpenSSL?
Cheers,
Toby.
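(An editorial sketch of one possible answer, assuming Python 3 and urllib3 1.17 or newer: load the encrypted key once, with its password, into an `ssl.SSLContext` and mount it through a custom adapter. The class and file names are hypothetical.)

```python
import ssl
import requests
from requests.adapters import HTTPAdapter

class PassphraseAdapter(HTTPAdapter):
    def __init__(self, certfile, keyfile, password, **kwargs):
        self._ctx = ssl.create_default_context()
        # the passphrase is supplied once here, instead of at an OpenSSL prompt
        self._ctx.load_cert_chain(certfile, keyfile, password=password)
        super(PassphraseAdapter, self).__init__(**kwargs)

    def init_poolmanager(self, *args, **kwargs):
        kwargs['ssl_context'] = self._ctx
        return super(PassphraseAdapter, self).init_poolmanager(*args, **kwargs)

s = requests.Session()
s.mount('https://', PassphraseAdapter('client.pem', 'client.key', 's3cret'))
```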
| closed | 2015-03-30T15:55:43Z | 2024-05-20T14:36:24Z | https://github.com/psf/requests/issues/2519 | Here's an alternative. Re-encode the cert to not require a passphrase.
Run this script passing the .p12 as an argument. It'll prompt for the passphrase and generate a .pem that doesn't need one.
Then it asks for a new passphrase (twice, for validation) that's used to build a new .p12. You can leave it blank and the result is a .p12 and .pem that don't require passphrases.
```
#!/bin/bash -e
np_pem=${1/.p12/_np.pem}
np_p12=${np_pem/pem/p12}
openssl pkcs12 -in $1 -nodes -out $np_pem
echo "Press <CR> twice here for empty passphrase"
openssl pkcs12 -export -in $np_pem -out $np_p12
``` | tdussa | 33 | Here's an alternative. Re-encode the cert to not require a passphrase.
Run this script passing the .p12 as an argument. It'll prompt for the passphrase and generate a .pem that doesn't need one.
Then it asks for a new passphrase (twice, for validation) that's used to build a new .p12. You can leave it blank and the result is a .p12 and .pem that don't require passphrases.
```
#!/bin/bash -e
np_pem=${1/.p12/_np.pem}
np_p12=${np_pem/pem/p12}
openssl pkcs12 -in $1 -nodes -out $np_pem
echo "Press <CR> twice here for empty passphrase"
openssl pkcs12 -export -in $np_pem -out $np_p12
``` | bedge | 2018-05-23T17:35:02Z | https://github.com/psf/requests/issues/2519#issuecomment-391434308 | 1 |
psf/requests | http | 2,406 | Unusable `requests` in main and forked processes (`rq` worker script) | This is a follow up from https://github.com/kennethreitz/requests/issues/2399#issuecomment-69675695
As stated there, this is a problem which has to do with `requests` being used in both a main process and forked processes -- the `rq` worker script. If the network goes down even for a short while, `requests` raises few exceptions (ConnectionError) then it becomes unusable and the forked processes get killed instantly if they try to use it (requests.get). The issue is described on the `rq` side as well: https://github.com/nvie/rq/issues/473
Sorry if this not very clear, tired now and I'm struggling with this issue for few weeks already...
I've put up a gist to reproduce it (https://gist.github.com/ducu/ee8c0b1028775df6c72e), but please let me know if I can help. Thanks a lot for your support, cheers
| closed | 2015-01-13T01:52:32Z | 2021-09-08T15:00:53Z | https://github.com/psf/requests/issues/2406 | Does requests misbehave if you pull the link down without forking? That is, if you have a single ordinary script, no forking or workers, using requests, does it hang if you run `ifconfig en0 down` in the same way?
| ducu | 33 | Does requests misbehave if you pull the link down without forking? That is, if you have a single ordinary script, no forking or workers, using requests, does it hang if you run `ifconfig en0 down` in the same way?
| Lukasa | 2015-01-13T07:28:28Z | https://github.com/psf/requests/issues/2406#issuecomment-69705464 | 0 |
psf/requests | http | 5,003 | POST request works in Postman/cURL, but not in Requests | I have a POST request that works perfectly with both Postman and cURL (it returns a JSON blob of data). However, when I perform the exact same request with Python's Requests library, I get a 200 success response, but instead of my JSON blob, I get this:
```html
<html>
<head>
<META NAME="robots" CONTENT="noindex,nofollow">
<script src="/_Incapsula_Resource?SWJIYLWA=5074a744e2e3d891814e9a2dace20bd4,719d34d31c8e3a6e6fffd425f7e032f3">
</script>
<body>
</body></html>
```
I've used HTTP request bins to verify that the request (headers and payload) from Postman/cURL is *exactly the same* as the one from Python Requests.
Here is my Postman request in cURL:
```bash
curl -X POST \
https://someurl/bla/bla \
-H 'Content-Type: application/json' \
-H 'Referer: https://www.host.com/bla/bla/' \
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:65.0) Gecko/20100101 Firefox/65.0' \
-H 'cache-control: no-cache' \
-d '{"json1":"blabla","etc":"etc"}'
```
...and here is my Python code:
```python
payload = {
"json1": "blabla",
"etc": "etc",
}
headers = {
'Host': 'www.host.com',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36',
'Accept': 'application/json, text/javascript, */*; q=0.01',
'Accept-Language': 'en-US,en;q=0.5',
'Accept-Encoding': 'gzip, deflate, br',
'Referer': 'https://www.host.com/bla/bla/',
'Content-Type':'application/json',
'X-Requested-With': 'XMLHttpRequest',
'Connection': 'keep-alive',
'Origin': 'https://www.host.com',
}
s = requests.Session()
response_raw = s.post(url, json=payload, headers=headers)
print(response_raw)
print(response_raw.text)
```
I have verified that the payload and headers are correct and valid. I don't think it's a cookies or redirect issue, since I've disabled both of those params with Postman/cURL and everything still works fine. I'm stymied how the host server is somehow able to tell the difference between two seemingly identical HTTP requests...
Any help would be much appreciated; thanks!
| closed | 2019-02-27T00:35:02Z | 2021-11-26T04:00:30Z | https://github.com/psf/requests/issues/5003 | This weird behavior has been reproduced. See [comment thread on StackOverfow](https://stackoverflow.com/questions/54878769/post-request-works-in-postman-but-not-in-python-requests-200-response-with-rob?noredirect=1#comment96632705_54878769). | mukundt | 32 | This weird behavior has been reproduced. See [comment thread on StackOverfow](https://stackoverflow.com/questions/54878769/post-request-works-in-postman-but-not-in-python-requests-200-response-with-rob?noredirect=1#comment96632705_54878769). | mukundt | 2019-02-28T22:37:21Z | https://github.com/psf/requests/issues/5003#issuecomment-468467213 | 5 |
psf/requests | http | 4,244 | ("bad handshake: SysCallError(-1, 'Unexpected EOF')",) despite using verify=False | Summary.
I am trying to make a request to a private API with an expired certificate that I do not control.
I am attempting to use verify=False in the request, but continue to get a
("bad handshake: SysCallError(-1, 'Unexpected EOF')",) error.
I have tried using the old 2.11 cipher string, but I still cannot complete the request. I have tried creating a custom adapter as detailed here: https://lukasa.co.uk/2017/02/Configuring_TLS_With_Requests/
I am able to recreate the request locally and inside the container that holds my application, but I am not able to make the request within my application without the bad handshake.
## Expected Result
Get the same response as I am able to get with cURL/postman/etc.
## Actual Result
("bad handshake: SysCallError(-1, 'Unexpected EOF')",)
## Reproduction Steps
```python
import requests
requests.post(url, data=data, headers=headers, verify=False)
```
Have also tried making get requests here without data just to test, they also fail
## System Information
$ python -m requests.help
```
{
"chardet": {
"version": "3.0.4"
},
"cryptography": {
"version": "2.0.3"
},
"idna": {
"version": ""
},
"implementation": {
"name": "CPython",
"version": "3.5.2"
},
"platform": {
"release": "4.9.31-moby",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "1010006f",
"version": "16.2.0"
},
"requests": {
"version": "2.18.3"
},
"system_ssl": {
"version": "1000207f"
},
"urllib3": {
"version": "1.21.1"
},
"using_pyopenssl": true
}
```
This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c). | closed | 2017-08-15T21:09:26Z | 2018-09-28T12:56:07Z | https://github.com/psf/requests/issues/4244 | `verify=False` only prevents us from validating the certificate. Based on this error we aren't getting that far: the server is shutting down the connection. Is the server in question publicly reachable? | adamwilbert | 32 | `verify=False` only prevents us from validating the certificate. Based on this error we aren't getting that far: the server is shutting down the connection. Is the server in question publicly reachable? | Lukasa | 2017-08-16T07:45:49Z | https://github.com/psf/requests/issues/4244#issuecomment-322692730 | 0 |
psf/requests | http | 2,950 | Unable to upload chunks to Flask | I do not know if it comes from Flask or Requests; I'm linking the issue posted on the Flask tracker:
https://github.com/mitsuhiko/flask/issues/1668
| closed | 2015-12-29T08:46:10Z | 2015-12-31T14:26:03Z | https://github.com/psf/requests/issues/2950 | Are you sure the headers you posted in the Flask issue are right? They look totally invalid.
| johaven | 32 | Are you sure the headers you posted in the Flask issue are right? They look totally invalid.
| Lukasa | 2015-12-29T09:42:40Z | https://github.com/psf/requests/issues/2950#issuecomment-167757184 | 0 |
psf/requests | http | 2,117 | Multipart files unicode filename | Starting from 2.0 requests does not send `filename` attribute of `Content-Disposition` header for multipart files with unicode names. Instead of this attribute with name `filename*` is sent.
```
# requests 1.2.3
>>> requests.post('http://ya.ru', files={'file': (u'файл', '123')}).request.body
'--db7a9522a6344e26a4ca2933aecad887\r\nContent-Disposition: form-data; name="file"; filename="\xd1\x84\xd0\xb0\xd0\xb9\xd0\xbb"\r\nContent-Type: application/octet-stream\r\n\r\n123\r\n--db7a9522a6344e26a4ca2933aecad887--\r\n'
# requests 2.0
>>> requests.post('http://ya.ru', files={'file': (u'файл', '123')}).request.body
'--a9f0de2871da46df86140bc5b72fc722\r\nContent-Disposition: form-data; name="file"; filename*=utf-8\'\'%D1%84%D0%B0%D0%B9%D0%BB\r\n\r\n123\r\n--a9f0de2871da46df86140bc5b72fc722--\r\n'
```
And this is a big problem, because it looks like some systems do not recognize such fields as files. At least we encountered a problem with Django. Django places the entire file's content in `request.POST` instead of `request.FILES`. It is clear from the sources:
https://github.com/django/django/blob/1.7c1/django/http/multipartparser.py#L599-L601
| closed | 2014-07-02T15:53:05Z | 2014-09-10T18:37:56Z | https://github.com/psf/requests/issues/2117 | This is actually a bug in Django then. Using this syntax is what we're supposed to be using. We have to indicate to the server that we're sending a field whose content is not ASCII or Latin-1 (ISO-8859-1). The proper way to do so is the syntax you see there. It is defined in [Section 3.2.1 of RFC 5987](http://tools.ietf.org/html/rfc5987#section-3.2.1). This bug should be filed against Django instead for not conforming to the proper handling of that value. **Edit** Note specifically that `parameter` is redefined as `reg-parameter` _or_ `ext-parameter` where `ext-parameter` is defined as `parmname` (e.g., `filename`) concatenated with a `*` character followed by = and the `ext-value`. This confirms that this is the proper handling of those header values. I believe this RFC also is applied to MIME header values which are what you're using in your `multipart/form-data` upload.
| homm | 32 | This is actually a bug in Django then. Using this syntax is what we're supposed to be using. We have to indicate to the server that we're sending a field whose content is not ASCII or Latin-1 (ISO-8859-1). The proper way to do so is the syntax you see there. It is defined in [Section 3.2.1 of RFC 5987](http://tools.ietf.org/html/rfc5987#section-3.2.1). This bug should be filed against Django instead for not conforming to the proper handling of that value. **Edit** Note specifically that `parameter` is redefined as `reg-parameter` _or_ `ext-parameter` where `ext-parameter` is defined as `parmname` (e.g., `filename`) concatenated with a `*` character followed by = and the `ext-value`. This confirms that this is the proper handling of those header values. I believe this RFC also is applied to MIME header values which are what you're using in your `multipart/form-data` upload.
| sigmavirus24 | 2014-07-02T15:57:57Z | https://github.com/psf/requests/issues/2117#issuecomment-47795473 | 0 |
psf/requests | http | 2,008 | urllib3 has been updated to new version 1.8.1, "source_address" is supported. May the requests lib support it? | urllib3 has been updated to version 1.8.1, which supports `source_address` (in Python 2.7). Could you make a change to support it? It's really useful and needed. I would appreciate it very much.
| closed | 2014-04-18T06:07:16Z | 2018-03-23T02:08:42Z | https://github.com/psf/requests/issues/2008 | Thanks for raising this issue!
Requests doesn't plan to add this to the main API, it's simply not commonly used enough to justify the increased complexity. My recommendation is that you use our [Transport Adapter abstraction](http://docs.python-requests.org/en/latest/user/advanced/#transport-adapters) to provide the value. The example adapter that I linked to should provide enough of an example to demonstrate how this would work, but let me know if it didn't and I'll demonstrate.
| bofortitude | 32 | Thanks for raising this issue!
Requests doesn't plan to add this to the main API, it's simply not commonly used enough to justify the increased complexity. My recommendation is that you use our [Transport Adapter abstraction](http://docs.python-requests.org/en/latest/user/advanced/#transport-adapters) to provide the value. The example adapter that I linked to should provide enough of an example to demonstrate how this would work, but let me know if it didn't and I'll demonstrate.
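For reference, a hedged sketch of such an adapter (requests_toolbelt later shipped essentially this as `SourceAddressAdapter`):

```python
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class SourceAddressAdapter(HTTPAdapter):
    def __init__(self, source_address, **kwargs):
        self.source_address = source_address
        super(SourceAddressAdapter, self).__init__(**kwargs)

    def init_poolmanager(self, connections, maxsize, block=False):
        # urllib3 hands source_address through to the underlying socket
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       source_address=self.source_address)

s = requests.Session()
s.mount('http://', SourceAddressAdapter(('10.10.10.10', 0)))
s.mount('https://', SourceAddressAdapter(('10.10.10.10', 0)))
```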
| Lukasa | 2014-04-18T07:11:30Z | https://github.com/psf/requests/issues/2008#issuecomment-40791533 | 0 |
psf/requests | http | 910 | POST with json data and oauth auth_header signature_type not working | The following code does not generate an Authorization header.
``` python
import requests
from requests.auth import OAuth1
import json
client_key = u'mykey'
client_secret = u'mysecret'
headeroauth = OAuth1(
client_key,
client_secret,
None,
None,
signature_type='auth_header'
)
payload = {
"type": "command_line",
"params": {
"command": "sleep 0"
}
}
url = u'http://localhost:8003/test'
data = json.dumps(payload, sort_keys=True, indent=4)
r = requests.post(url, auth=headeroauth, data=data )
```
If I replace
``` python
data=data
```
by
``` python
data={'data':'payload'}
```
I get an Authorization header but that's not what I need...
Am I missing anything? This seems to be broken.
| closed | 2012-10-25T17:29:41Z | 2021-09-09T04:00:40Z | https://github.com/psf/requests/issues/910 | Nowhere in requests does `data` mean `json data` (at least from reading the code). If it's in the documentation then that needs to be corrected, but there is no section I know of where what you expect is promised.
Granted, I would support a separate parameter `json` that would guarantee encoding of the data as json data, but that is likely outside of the scope of requests.
| alex-ethier | 32 | Nowhere in requests does `data` mean `json data` (at least from reading the code). If it's in the documentation then that needs to be corrected, but there is no section I know of where what you expect is promised.
Granted, I would support a separate parameter `json` that would guarantee encoding of the data as json data, but that is likely outside of the scope of requests.
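(Editorial note: this proposal later landed; requests 2.4.2 added a `json` keyword that serializes the payload and sets the Content-Type header:)

```python
# in requests 2.4.2 and later the proposed parameter exists
# (reusing url, payload, and headeroauth from the report above):
r = requests.post(url, json=payload, auth=headeroauth)
```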
| sigmavirus24 | 2012-10-25T18:06:54Z | https://github.com/psf/requests/issues/910#issuecomment-9787736 | 0 |
psf/requests | http | 30 | HTTPS Cert Checking | `Response.raise_for_status()`
| closed | 2011-05-13T13:31:24Z | 2021-09-09T10:00:40Z | https://github.com/psf/requests/issues/30 | there are some patches/workarounds: http://stackoverflow.com/questions/1875052/using-paired-certificates-with-urllib2
| kennethreitz | 30 | there are some patches/workarounds: http://stackoverflow.com/questions/1875052/using-paired-certificates-with-urllib2
| kennethreitz | 2011-05-13T13:33:05Z | https://github.com/psf/requests/issues/30#issuecomment-1152824 | 0 |
psf/requests | http | 6,443 | Latest release of requests causes urllib3 to throw an error | A previously working lambda function started throwing this error.
```json
{
"errorMessage": "Unable to import module 'app': cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (/var/task/urllib3/util/ssl_.py)",
"errorType": "Runtime.ImportModuleError",
"requestId": "30bf7245-a58f-4192-81e3-5d122fb31d11",
"stackTrace": []
}
```
While trying to debug the issue, I narrowed it down to being a problem with requests (new release yesterday 5/3/2023). I then tried referencing the prior release which fixed the problem and my function worked as expected again.
To reproduce the error:
requirements.txt > requests - uses the latest release (2.30.0) and causes the lambda function to throw the error above.
requirements.txt > requests==2.29.0 - uses the prior release (2.29.0). With this release the error above no longer occurs.
| closed | 2023-05-04T22:27:39Z | 2023-08-12T19:35:52Z | https://github.com/psf/requests/issues/6443 | Hi @Rach81,
This is unrelated to Requests. You're installing a version of urllib3 that's incompatible with the version of Boto3 bundled in your runtime. You'll either need to pin your urllib3 dependency to `urllib3<2` or rely on the one provided by Lambda. | Rach81 | 29 | Hi @Rach81,
This is unrelated to Requests. You're installing a version of urllib3 that's incompatible with the version of Boto3 bundled in your runtime. You'll either need to pin your urllib3 dependency to `urllib3<2` or rely on the one provided by Lambda. | nateprewitt | 2023-05-04T22:32:18Z | https://github.com/psf/requests/issues/6443#issuecomment-1535490465 | 1 |