problem_id
stringlengths 18
22
| source
stringclasses 1
value | task_type
stringclasses 1
value | in_source_id
stringlengths 13
58
| prompt
stringlengths 1.71k
18.9k
| golden_diff
stringlengths 145
5.13k
| verification_info
stringlengths 465
23.6k
| num_tokens_prompt
int64 556
4.1k
| num_tokens_diff
int64 47
1.02k
|
---|---|---|---|---|---|---|---|---|
gh_patches_debug_56182 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1562 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
prompt.read_user_dict() is broken due to click upgrade from 7.1.2 to 8.0.0
* Cookiecutter version: 1.7.3
* Template project url: -
* Python version: 3.9.5
* Operating System: macOS Catalina 10.15.7
### Description:
Apparently, there is a breaking change in `click==8.0.0` affecting dictionary values in cookiecutter.json
cookiecutter.json example:
```json
{
"project_name": "",
"project_policy": {"project_policy_example": "yes"}
}
```
```
% python -m cookiecutter ../Projects/project-configs
devplatform_project_name [infra-dev]:
project_name []: t
project_policy [default]:
Error: Unable to decode to JSON.
```
Looking closer at the cookiecutter.promt, I can see that in `read_user_dict()`, click passes `user_value='default'` to `process_json()`, instead of passing an actual default value from the cookiecutter.json as it was in `click 7.1.2`.
Link to the `process_json()` code: https://github.com/cookiecutter/cookiecutter/blob/master/cookiecutter/prompt.py#L81

As far as I can suppose, that issue could have been introduced by this PR https://github.com/pallets/click/pull/1517/
### Quick local fix
Install click first and specify version older than 8.0.0
```
pip install click==7.1.2
pip install cookiecutter
```
### Quick fix for cookiecutter library
in `setup.py` replace 'click>=7.0' with `'click>=7,<8.0.0'`
### What I've run:
```shell
% python3.9 -m venv test39
% source test39/bin/activate
% python -V
Python 3.9.5
% python -m pip install click==7.1.2
Collecting click==7.1.2
Using cached click-7.1.2-py2.py3-none-any.whl (82 kB)
Installing collected packages: click
Successfully installed click-7.1.2
(test39) ro.solyanik@macbook-ro Environments % python -m pip install cookiecutter
Collecting cookiecutter
Using cached cookiecutter-1.7.3-py2.py3-none-any.whl (34 kB)
Collecting six>=1.10
................................................
Installing collected packages: six, python-dateutil, MarkupSafe, urllib3, text-unidecode, Jinja2, idna, chardet, certifi, arrow, requests, python-slugify, poyo, jinja2-time, binaryornot, cookiecutter
Successfully installed Jinja2-3.0.1 MarkupSafe-2.0.1 arrow-1.1.0 binaryornot-0.4.4 certifi-2020.12.5 chardet-4.0.0 cookiecutter-1.7.3 idna-2.10 jinja2-time-0.2.0 poyo-0.5.0 python-dateutil-2.8.1 python-slugify-5.0.2 requests-2.25.1 six-1.16.0 text-unidecode-1.3 urllib3-1.26.4
% python -m cookiecutter ../Projects/project-configs
project_name []: t
project_policy [default]:
% ls t
Makefile README.md t tests
% rm -rf t
% python -m pip install click==8.0.0
Collecting click==8.0.0
Using cached click-8.0.0-py3-none-any.whl (96 kB)
Installing collected packages: click
Attempting uninstall: click
Found existing installation: click 7.1.2
Uninstalling click-7.1.2:
Successfully uninstalled click-7.1.2
Successfully installed click-8.0.0
% python -m cookiecutter ../Projects/project-configs
devplatform_project_name [infra-dev]:
project_name []: t
project_policy [default]:
Error: Unable to decode to JSON.
project_policy [default]:
Error: Unable to decode to JSON.
```
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 """cookiecutter distutils configuration."""
3 from setuptools import setup
4
5 version = "2.0.0"
6
7 with open('README.md', encoding='utf-8') as readme_file:
8 readme = readme_file.read()
9
10 requirements = [
11 'binaryornot>=0.4.4',
12 'Jinja2>=2.7,<4.0.0',
13 'click>=7.0',
14 'pyyaml>=5.3.1',
15 'jinja2-time>=0.2.0',
16 'python-slugify>=4.0.0',
17 'requests>=2.23.0',
18 ]
19
20 setup(
21 name='cookiecutter',
22 version=version,
23 description=(
24 'A command-line utility that creates projects from project '
25 'templates, e.g. creating a Python package project from a '
26 'Python package project template.'
27 ),
28 long_description=readme,
29 long_description_content_type='text/markdown',
30 author='Audrey Feldroy',
31 author_email='[email protected]',
32 url='https://github.com/cookiecutter/cookiecutter',
33 packages=['cookiecutter'],
34 package_dir={'cookiecutter': 'cookiecutter'},
35 entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},
36 include_package_data=True,
37 python_requires='>=3.6',
38 install_requires=requirements,
39 license='BSD',
40 zip_safe=False,
41 classifiers=[
42 "Development Status :: 5 - Production/Stable",
43 "Environment :: Console",
44 "Intended Audience :: Developers",
45 "Natural Language :: English",
46 "License :: OSI Approved :: BSD License",
47 "Programming Language :: Python :: 3 :: Only",
48 "Programming Language :: Python :: 3",
49 "Programming Language :: Python :: 3.6",
50 "Programming Language :: Python :: 3.7",
51 "Programming Language :: Python :: 3.8",
52 "Programming Language :: Python :: 3.9",
53 "Programming Language :: Python :: Implementation :: CPython",
54 "Programming Language :: Python :: Implementation :: PyPy",
55 "Programming Language :: Python",
56 "Topic :: Software Development",
57 ],
58 keywords=[
59 "cookiecutter",
60 "Python",
61 "projects",
62 "project templates",
63 "Jinja2",
64 "skeleton",
65 "scaffolding",
66 "project directory",
67 "package",
68 "packaging",
69 ],
70 )
71
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
requirements = [
'binaryornot>=0.4.4',
'Jinja2>=2.7,<4.0.0',
- 'click>=7.0',
+ 'click>=7.0,<8.0.0',
'pyyaml>=5.3.1',
'jinja2-time>=0.2.0',
'python-slugify>=4.0.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n requirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n- 'click>=7.0',\n+ 'click>=7.0,<8.0.0',\n 'pyyaml>=5.3.1',\n 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n", "issue": "prompt.read_user_dict() is broken due to click upgrade from 7.1.2 to 8.0.0\n* Cookiecutter version: 1.7.3\r\n* Template project url: -\r\n* Python version: 3.9.5\r\n* Operating System: macOS Catalina 10.15.7\r\n\r\n### Description:\r\n\r\nApparently, there is a breaking change in `click==8.0.0` affecting dictionary values in cookiecutter.json\r\ncookiecutter.json example:\r\n```json\r\n{\r\n \"project_name\": \"\",\r\n \"project_policy\": {\"project_policy_example\": \"yes\"}\r\n}\r\n```\r\n \r\n```\r\n% python -m cookiecutter ../Projects/project-configs\r\ndevplatform_project_name [infra-dev]: \r\nproject_name []: t\r\nproject_policy [default]: \r\nError: Unable to decode to JSON.\r\n```\r\n\r\nLooking closer at the cookiecutter.promt, I can see that in `read_user_dict()`, click passes `user_value='default'` to `process_json()`, instead of passing an actual default value from the cookiecutter.json as it was in `click 7.1.2`. \r\nLink to the `process_json()` code: https://github.com/cookiecutter/cookiecutter/blob/master/cookiecutter/prompt.py#L81\r\n\r\n\r\nAs far as I can suppose, that issue could have been introduced by this PR https://github.com/pallets/click/pull/1517/\r\n\r\n### Quick local fix\r\nInstall click first and specify version older than 8.0.0\r\n```\r\npip install click==7.1.2\r\npip install cookiecutter\r\n```\r\n\r\n### Quick fix for cookiecutter library\r\nin `setup.py` replace 'click>=7.0' with `'click>=7,<8.0.0'`\r\n\r\n### What I've run:\r\n\r\n```shell\r\n% python3.9 -m venv test39 \r\n \r\n% source test39/bin/activate\r\n\r\n% python -V\r\nPython 3.9.5\r\n\r\n\r\n% python -m pip install click==7.1.2\r\nCollecting click==7.1.2\r\n Using cached click-7.1.2-py2.py3-none-any.whl (82 kB)\r\nInstalling collected packages: click\r\nSuccessfully installed click-7.1.2\r\n(test39) ro.solyanik@macbook-ro Environments % python -m pip install cookiecutter\r\nCollecting cookiecutter\r\n Using cached cookiecutter-1.7.3-py2.py3-none-any.whl (34 kB)\r\nCollecting six>=1.10\r\n................................................\r\nInstalling collected packages: six, python-dateutil, MarkupSafe, urllib3, text-unidecode, Jinja2, idna, chardet, certifi, arrow, requests, python-slugify, poyo, jinja2-time, binaryornot, cookiecutter\r\nSuccessfully installed Jinja2-3.0.1 MarkupSafe-2.0.1 arrow-1.1.0 binaryornot-0.4.4 certifi-2020.12.5 chardet-4.0.0 cookiecutter-1.7.3 idna-2.10 jinja2-time-0.2.0 poyo-0.5.0 python-dateutil-2.8.1 python-slugify-5.0.2 requests-2.25.1 six-1.16.0 text-unidecode-1.3 urllib3-1.26.4\r\n\r\n% python -m cookiecutter ../Projects/project-configs\r\nproject_name []: t\r\nproject_policy [default]: \r\n\r\n% ls t \r\nMakefile README.md t tests\r\n\r\n% rm -rf t\r\n\r\n% python -m pip install click==8.0.0 \r\nCollecting click==8.0.0\r\n Using cached click-8.0.0-py3-none-any.whl (96 kB)\r\nInstalling collected packages: click\r\n Attempting uninstall: click\r\n Found existing installation: click 7.1.2\r\n Uninstalling click-7.1.2:\r\n Successfully uninstalled click-7.1.2\r\nSuccessfully installed click-8.0.0\r\n\r\n% python -m cookiecutter ../Projects/project-configs\r\ndevplatform_project_name [infra-dev]: \r\nproject_name []: t\r\nproject_policy [default]: \r\nError: Unable to decode to JSON.\r\nproject_policy [default]: \r\nError: Unable to decode to JSON.\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"cookiecutter distutils configuration.\"\"\"\nfrom setuptools import setup\n\nversion = \"2.0.0\"\n\nwith open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n 'click>=7.0',\n 'pyyaml>=5.3.1',\n 'jinja2-time>=0.2.0',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n]\n\nsetup(\n name='cookiecutter',\n version=version,\n description=(\n 'A command-line utility that creates projects from project '\n 'templates, e.g. creating a Python package project from a '\n 'Python package project template.'\n ),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n include_package_data=True,\n python_requires='>=3.6',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n ],\n keywords=[\n \"cookiecutter\",\n \"Python\",\n \"projects\",\n \"project templates\",\n \"Jinja2\",\n \"skeleton\",\n \"scaffolding\",\n \"project directory\",\n \"package\",\n \"packaging\",\n ],\n)\n", "path": "setup.py"}]} | 2,233 | 124 |
gh_patches_debug_3807 | rasdani/github-patches | git_diff | quantumlib__Cirq-3574 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs build is failing
Since the black formatter merge the RTD builds are failing with some weird pip error:
https://readthedocs.org/projects/cirq/builds/
Need to look into it and resolve it if the error is on our end or report it to the RTD team if it's on their end.
</issue>
<code>
[start of setup.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17 from setuptools import find_packages, setup
18
19 # This reads the __version__ variable from cirq/_version.py
20 __version__ = ''
21 exec(open('cirq/_version.py').read())
22
23 name = 'cirq'
24
25 description = (
26 'A framework for creating, editing, and invoking '
27 'Noisy Intermediate Scale Quantum (NISQ) circuits.'
28 )
29
30 # README file as long_description.
31 long_description = io.open('README.rst', encoding='utf-8').read()
32
33 # If CIRQ_PRE_RELEASE_VERSION is set then we update the version to this value.
34 # It is assumed that it ends with one of `.devN`, `.aN`, `.bN`, `.rcN` and hence
35 # it will be a pre-release version on PyPi. See
36 # https://packaging.python.org/guides/distributing-packages-using-setuptools/#pre-release-versioning
37 # for more details.
38 if 'CIRQ_PRE_RELEASE_VERSION' in os.environ:
39 __version__ = os.environ['CIRQ_PRE_RELEASE_VERSION']
40 long_description = (
41 "**This is a development version of Cirq and may be "
42 "unstable.**\n\n**For the latest stable release of Cirq "
43 "see**\n`here <https://pypi.org/project/cirq>`__.\n\n" + long_description
44 )
45
46 # Read in requirements
47 requirements = open('requirements.txt').readlines()
48 requirements = [r.strip() for r in requirements]
49 contrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()
50 contrib_requirements = [r.strip() for r in contrib_requirements]
51 dev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()
52 dev_requirements = [r.strip() for r in dev_requirements]
53
54 cirq_packages = ['cirq'] + ['cirq.' + package for package in find_packages(where='cirq')]
55
56 # Sanity check
57 assert __version__, 'Version string cannot be empty'
58
59 setup(
60 name=name,
61 version=__version__,
62 url='http://github.com/quantumlib/cirq',
63 author='The Cirq Developers',
64 author_email='[email protected]',
65 python_requires=('>=3.6.0'),
66 install_requires=requirements,
67 extras_require={
68 'contrib': contrib_requirements,
69 'dev_env': dev_requirements + contrib_requirements,
70 },
71 license='Apache 2',
72 description=description,
73 long_description=long_description,
74 packages=cirq_packages,
75 package_data={
76 'cirq': ['py.typed'],
77 'cirq.google.api.v1': ['*.proto', '*.pyi'],
78 'cirq.google.api.v2': ['*.proto', '*.pyi'],
79 'cirq.protocols.json_test_data': ['*'],
80 },
81 )
82
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -62,7 +62,7 @@
url='http://github.com/quantumlib/cirq',
author='The Cirq Developers',
author_email='[email protected]',
- python_requires=('>=3.6.0'),
+ python_requires=('>=3.7.0'),
install_requires=requirements,
extras_require={
'contrib': contrib_requirements,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -62,7 +62,7 @@\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n- python_requires=('>=3.6.0'),\n+ python_requires=('>=3.7.0'),\n install_requires=requirements,\n extras_require={\n 'contrib': contrib_requirements,\n", "issue": "Docs build is failing\nSince the black formatter merge the RTD builds are failing with some weird pip error:\r\n\r\nhttps://readthedocs.org/projects/cirq/builds/\r\n\r\nNeed to look into it and resolve it if the error is on our end or report it to the RTD team if it's on their end.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\nfrom setuptools import find_packages, setup\n\n# This reads the __version__ variable from cirq/_version.py\n__version__ = ''\nexec(open('cirq/_version.py').read())\n\nname = 'cirq'\n\ndescription = (\n 'A framework for creating, editing, and invoking '\n 'Noisy Intermediate Scale Quantum (NISQ) circuits.'\n)\n\n# README file as long_description.\nlong_description = io.open('README.rst', encoding='utf-8').read()\n\n# If CIRQ_PRE_RELEASE_VERSION is set then we update the version to this value.\n# It is assumed that it ends with one of `.devN`, `.aN`, `.bN`, `.rcN` and hence\n# it will be a pre-release version on PyPi. See\n# https://packaging.python.org/guides/distributing-packages-using-setuptools/#pre-release-versioning\n# for more details.\nif 'CIRQ_PRE_RELEASE_VERSION' in os.environ:\n __version__ = os.environ['CIRQ_PRE_RELEASE_VERSION']\n long_description = (\n \"**This is a development version of Cirq and may be \"\n \"unstable.**\\n\\n**For the latest stable release of Cirq \"\n \"see**\\n`here <https://pypi.org/project/cirq>`__.\\n\\n\" + long_description\n )\n\n# Read in requirements\nrequirements = open('requirements.txt').readlines()\nrequirements = [r.strip() for r in requirements]\ncontrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()\ncontrib_requirements = [r.strip() for r in contrib_requirements]\ndev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()\ndev_requirements = [r.strip() for r in dev_requirements]\n\ncirq_packages = ['cirq'] + ['cirq.' + package for package in find_packages(where='cirq')]\n\n# Sanity check\nassert __version__, 'Version string cannot be empty'\n\nsetup(\n name=name,\n version=__version__,\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n python_requires=('>=3.6.0'),\n install_requires=requirements,\n extras_require={\n 'contrib': contrib_requirements,\n 'dev_env': dev_requirements + contrib_requirements,\n },\n license='Apache 2',\n description=description,\n long_description=long_description,\n packages=cirq_packages,\n package_data={\n 'cirq': ['py.typed'],\n 'cirq.google.api.v1': ['*.proto', '*.pyi'],\n 'cirq.google.api.v2': ['*.proto', '*.pyi'],\n 'cirq.protocols.json_test_data': ['*'],\n },\n)\n", "path": "setup.py"}]} | 1,480 | 107 |
gh_patches_debug_27198 | rasdani/github-patches | git_diff | python-poetry__poetry-1910 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
poetry complains about missing argument when using `--help`
<!--
Hi there! Thank you for discovering and submitting an issue.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [ ] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
## Issue
<!-- Now feel free to write your issue, but please be descriptive! Thanks again 🙌 ❤️ -->
I don't know whether this is a poetry issue or cleo and if this problem arises in earlier versions.
When I type `poetry add --help` I receive the error message
```
Not enough arguments (missing: "name").
```
Similar for `poetry remove --help`
```
Not enough arguments (missing: "packages").
```
If I append any name I get the help page.
The expected behavior would be, that whenever I use `--help`, the help page should be displayed and mandatory arguments for sub command shouldn't be checked.
Saw this with version 1.0.0b6 and 1.0.0b7
</issue>
<code>
[start of poetry/console/config/application_config.py]
1 import logging
2
3 from typing import Any
4
5 from cleo.config import ApplicationConfig as BaseApplicationConfig
6 from clikit.api.application.application import Application
7 from clikit.api.args.raw_args import RawArgs
8 from clikit.api.event import PRE_HANDLE
9 from clikit.api.event import PreHandleEvent
10 from clikit.api.event import PreResolveEvent
11 from clikit.api.event.event_dispatcher import EventDispatcher
12 from clikit.api.formatter import Style
13 from clikit.api.io import Input
14 from clikit.api.io import InputStream
15 from clikit.api.io import Output
16 from clikit.api.io import OutputStream
17 from clikit.api.io.flags import DEBUG
18 from clikit.api.io.flags import VERBOSE
19 from clikit.api.io.flags import VERY_VERBOSE
20 from clikit.api.io.io import IO
21 from clikit.formatter import AnsiFormatter
22 from clikit.formatter import PlainFormatter
23 from clikit.io.input_stream import StandardInputStream
24 from clikit.io.output_stream import ErrorOutputStream
25 from clikit.io.output_stream import StandardOutputStream
26
27 from poetry.console.commands.command import Command
28 from poetry.console.commands.env_command import EnvCommand
29 from poetry.console.logging.io_formatter import IOFormatter
30 from poetry.console.logging.io_handler import IOHandler
31
32
33 class ApplicationConfig(BaseApplicationConfig):
34 def configure(self):
35 super(ApplicationConfig, self).configure()
36
37 self.add_style(Style("c1").fg("cyan"))
38 self.add_style(Style("info").fg("blue"))
39 self.add_style(Style("comment").fg("green"))
40 self.add_style(Style("error").fg("red").bold())
41 self.add_style(Style("warning").fg("yellow"))
42 self.add_style(Style("debug").fg("black").bold())
43
44 self.add_event_listener(PRE_HANDLE, self.register_command_loggers)
45 self.add_event_listener(PRE_HANDLE, self.set_env)
46
47 def register_command_loggers(
48 self, event, event_name, _
49 ): # type: (PreHandleEvent, str, Any) -> None
50 command = event.command.config.handler
51 if not isinstance(command, Command):
52 return
53
54 io = event.io
55
56 loggers = ["poetry.packages.package", "poetry.utils.password_manager"]
57
58 loggers += command.loggers
59
60 handler = IOHandler(io)
61 handler.setFormatter(IOFormatter())
62
63 for logger in loggers:
64 logger = logging.getLogger(logger)
65
66 logger.handlers = [handler]
67 logger.propagate = False
68
69 level = logging.WARNING
70 if io.is_debug():
71 level = logging.DEBUG
72 elif io.is_very_verbose() or io.is_verbose():
73 level = logging.INFO
74
75 logger.setLevel(level)
76
77 def set_env(self, event, event_name, _): # type: (PreHandleEvent, str, Any) -> None
78 from poetry.utils.env import EnvManager
79
80 command = event.command.config.handler # type: EnvCommand
81 if not isinstance(command, EnvCommand):
82 return
83
84 io = event.io
85 poetry = command.poetry
86
87 env_manager = EnvManager(poetry)
88 env = env_manager.create_venv(io)
89
90 if env.is_venv() and io.is_verbose():
91 io.write_line("Using virtualenv: <comment>{}</>".format(env.path))
92
93 command.set_env(env)
94
95 def resolve_help_command(
96 self, event, event_name, dispatcher
97 ): # type: (PreResolveEvent, str, EventDispatcher) -> None
98 args = event.raw_args
99 application = event.application
100
101 if args.has_option_token("-h") or args.has_option_token("--help"):
102 from clikit.api.resolver import ResolvedCommand
103
104 resolved_command = self.command_resolver.resolve(args, application)
105 # If the current command is the run one, skip option
106 # check and interpret them as part of the executed command
107 if resolved_command.command.name == "run":
108 event.set_resolved_command(resolved_command)
109
110 return event.stop_propagation()
111
112 command = application.get_command("help")
113
114 # Enable lenient parsing
115 parsed_args = command.parse(args, True)
116
117 event.set_resolved_command(ResolvedCommand(command, parsed_args))
118 event.stop_propagation()
119
120 def create_io(
121 self,
122 application,
123 args,
124 input_stream=None,
125 output_stream=None,
126 error_stream=None,
127 ): # type: (Application, RawArgs, InputStream, OutputStream, OutputStream) -> IO
128 if input_stream is None:
129 input_stream = StandardInputStream()
130
131 if output_stream is None:
132 output_stream = StandardOutputStream()
133
134 if error_stream is None:
135 error_stream = ErrorOutputStream()
136
137 style_set = application.config.style_set
138
139 if output_stream.supports_ansi():
140 output_formatter = AnsiFormatter(style_set)
141 else:
142 output_formatter = PlainFormatter(style_set)
143
144 if error_stream.supports_ansi():
145 error_formatter = AnsiFormatter(style_set)
146 else:
147 error_formatter = PlainFormatter(style_set)
148
149 io = self.io_class(
150 Input(input_stream),
151 Output(output_stream, output_formatter),
152 Output(error_stream, error_formatter),
153 )
154
155 resolved_command = application.resolve_command(args)
156 # If the current command is the run one, skip option
157 # check and interpret them as part of the executed command
158 if resolved_command.command.name == "run":
159 return io
160
161 if args.has_option_token("--no-ansi"):
162 formatter = PlainFormatter(style_set)
163 io.output.set_formatter(formatter)
164 io.error_output.set_formatter(formatter)
165 elif args.has_option_token("--ansi"):
166 formatter = AnsiFormatter(style_set, True)
167 io.output.set_formatter(formatter)
168 io.error_output.set_formatter(formatter)
169
170 if args.has_option_token("-vvv") or self.is_debug():
171 io.set_verbosity(DEBUG)
172 elif args.has_option_token("-vv"):
173 io.set_verbosity(VERY_VERBOSE)
174 elif args.has_option_token("-v"):
175 io.set_verbosity(VERBOSE)
176
177 if args.has_option_token("--quiet") or args.has_option_token("-q"):
178 io.set_quiet(True)
179
180 if args.has_option_token("--no-interaction") or args.has_option_token("-n"):
181 io.set_interactive(False)
182
183 return io
184
[end of poetry/console/config/application_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/poetry/console/config/application_config.py b/poetry/console/config/application_config.py
--- a/poetry/console/config/application_config.py
+++ b/poetry/console/config/application_config.py
@@ -9,6 +9,7 @@
from clikit.api.event import PreHandleEvent
from clikit.api.event import PreResolveEvent
from clikit.api.event.event_dispatcher import EventDispatcher
+from clikit.api.exceptions import CliKitException
from clikit.api.formatter import Style
from clikit.api.io import Input
from clikit.api.io import InputStream
@@ -101,7 +102,16 @@
if args.has_option_token("-h") or args.has_option_token("--help"):
from clikit.api.resolver import ResolvedCommand
- resolved_command = self.command_resolver.resolve(args, application)
+ try:
+ resolved_command = self.command_resolver.resolve(args, application)
+ except CliKitException:
+ # We weren't able to resolve the command,
+ # due to a parse error most likely,
+ # so we fall back on the default behavior
+ return super(ApplicationConfig, self).resolve_help_command(
+ event, event_name, dispatcher
+ )
+
# If the current command is the run one, skip option
# check and interpret them as part of the executed command
if resolved_command.command.name == "run":
| {"golden_diff": "diff --git a/poetry/console/config/application_config.py b/poetry/console/config/application_config.py\n--- a/poetry/console/config/application_config.py\n+++ b/poetry/console/config/application_config.py\n@@ -9,6 +9,7 @@\n from clikit.api.event import PreHandleEvent\n from clikit.api.event import PreResolveEvent\n from clikit.api.event.event_dispatcher import EventDispatcher\n+from clikit.api.exceptions import CliKitException\n from clikit.api.formatter import Style\n from clikit.api.io import Input\n from clikit.api.io import InputStream\n@@ -101,7 +102,16 @@\n if args.has_option_token(\"-h\") or args.has_option_token(\"--help\"):\n from clikit.api.resolver import ResolvedCommand\n \n- resolved_command = self.command_resolver.resolve(args, application)\n+ try:\n+ resolved_command = self.command_resolver.resolve(args, application)\n+ except CliKitException:\n+ # We weren't able to resolve the command,\n+ # due to a parse error most likely,\n+ # so we fall back on the default behavior\n+ return super(ApplicationConfig, self).resolve_help_command(\n+ event, event_name, dispatcher\n+ )\n+\n # If the current command is the run one, skip option\n # check and interpret them as part of the executed command\n if resolved_command.command.name == \"run\":\n", "issue": "poetry complains about missing argument when using `--help`\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [ ] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n## Issue\r\n<!-- Now feel free to write your issue, but please be descriptive! Thanks again \ud83d\ude4c \u2764\ufe0f -->\r\n\r\nI don't know whether this is a poetry issue or cleo and if this problem arises in earlier versions.\r\n\r\nWhen I type `poetry add --help` I receive the error message\r\n\r\n```\r\nNot enough arguments (missing: \"name\").\r\n```\r\n\r\nSimilar for `poetry remove --help`\r\n\r\n```\r\nNot enough arguments (missing: \"packages\").\r\n```\r\n\r\nIf I append any name I get the help page.\r\n\r\nThe expected behavior would be, that whenever I use `--help`, the help page should be displayed and mandatory arguments for sub command shouldn't be checked.\r\n\r\nSaw this with version 1.0.0b6 and 1.0.0b7\n", "before_files": [{"content": "import logging\n\nfrom typing import Any\n\nfrom cleo.config import ApplicationConfig as BaseApplicationConfig\nfrom clikit.api.application.application import Application\nfrom clikit.api.args.raw_args import RawArgs\nfrom clikit.api.event import PRE_HANDLE\nfrom clikit.api.event import PreHandleEvent\nfrom clikit.api.event import PreResolveEvent\nfrom clikit.api.event.event_dispatcher import EventDispatcher\nfrom clikit.api.formatter import Style\nfrom clikit.api.io import Input\nfrom clikit.api.io import InputStream\nfrom clikit.api.io import Output\nfrom clikit.api.io import OutputStream\nfrom clikit.api.io.flags import DEBUG\nfrom clikit.api.io.flags import VERBOSE\nfrom clikit.api.io.flags import VERY_VERBOSE\nfrom clikit.api.io.io import IO\nfrom clikit.formatter import AnsiFormatter\nfrom clikit.formatter import PlainFormatter\nfrom clikit.io.input_stream import StandardInputStream\nfrom clikit.io.output_stream import ErrorOutputStream\nfrom clikit.io.output_stream import StandardOutputStream\n\nfrom poetry.console.commands.command import Command\nfrom poetry.console.commands.env_command import EnvCommand\nfrom poetry.console.logging.io_formatter import IOFormatter\nfrom poetry.console.logging.io_handler import IOHandler\n\n\nclass ApplicationConfig(BaseApplicationConfig):\n def configure(self):\n super(ApplicationConfig, self).configure()\n\n self.add_style(Style(\"c1\").fg(\"cyan\"))\n self.add_style(Style(\"info\").fg(\"blue\"))\n self.add_style(Style(\"comment\").fg(\"green\"))\n self.add_style(Style(\"error\").fg(\"red\").bold())\n self.add_style(Style(\"warning\").fg(\"yellow\"))\n self.add_style(Style(\"debug\").fg(\"black\").bold())\n\n self.add_event_listener(PRE_HANDLE, self.register_command_loggers)\n self.add_event_listener(PRE_HANDLE, self.set_env)\n\n def register_command_loggers(\n self, event, event_name, _\n ): # type: (PreHandleEvent, str, Any) -> None\n command = event.command.config.handler\n if not isinstance(command, Command):\n return\n\n io = event.io\n\n loggers = [\"poetry.packages.package\", \"poetry.utils.password_manager\"]\n\n loggers += command.loggers\n\n handler = IOHandler(io)\n handler.setFormatter(IOFormatter())\n\n for logger in loggers:\n logger = logging.getLogger(logger)\n\n logger.handlers = [handler]\n logger.propagate = False\n\n level = logging.WARNING\n if io.is_debug():\n level = logging.DEBUG\n elif io.is_very_verbose() or io.is_verbose():\n level = logging.INFO\n\n logger.setLevel(level)\n\n def set_env(self, event, event_name, _): # type: (PreHandleEvent, str, Any) -> None\n from poetry.utils.env import EnvManager\n\n command = event.command.config.handler # type: EnvCommand\n if not isinstance(command, EnvCommand):\n return\n\n io = event.io\n poetry = command.poetry\n\n env_manager = EnvManager(poetry)\n env = env_manager.create_venv(io)\n\n if env.is_venv() and io.is_verbose():\n io.write_line(\"Using virtualenv: <comment>{}</>\".format(env.path))\n\n command.set_env(env)\n\n def resolve_help_command(\n self, event, event_name, dispatcher\n ): # type: (PreResolveEvent, str, EventDispatcher) -> None\n args = event.raw_args\n application = event.application\n\n if args.has_option_token(\"-h\") or args.has_option_token(\"--help\"):\n from clikit.api.resolver import ResolvedCommand\n\n resolved_command = self.command_resolver.resolve(args, application)\n # If the current command is the run one, skip option\n # check and interpret them as part of the executed command\n if resolved_command.command.name == \"run\":\n event.set_resolved_command(resolved_command)\n\n return event.stop_propagation()\n\n command = application.get_command(\"help\")\n\n # Enable lenient parsing\n parsed_args = command.parse(args, True)\n\n event.set_resolved_command(ResolvedCommand(command, parsed_args))\n event.stop_propagation()\n\n def create_io(\n self,\n application,\n args,\n input_stream=None,\n output_stream=None,\n error_stream=None,\n ): # type: (Application, RawArgs, InputStream, OutputStream, OutputStream) -> IO\n if input_stream is None:\n input_stream = StandardInputStream()\n\n if output_stream is None:\n output_stream = StandardOutputStream()\n\n if error_stream is None:\n error_stream = ErrorOutputStream()\n\n style_set = application.config.style_set\n\n if output_stream.supports_ansi():\n output_formatter = AnsiFormatter(style_set)\n else:\n output_formatter = PlainFormatter(style_set)\n\n if error_stream.supports_ansi():\n error_formatter = AnsiFormatter(style_set)\n else:\n error_formatter = PlainFormatter(style_set)\n\n io = self.io_class(\n Input(input_stream),\n Output(output_stream, output_formatter),\n Output(error_stream, error_formatter),\n )\n\n resolved_command = application.resolve_command(args)\n # If the current command is the run one, skip option\n # check and interpret them as part of the executed command\n if resolved_command.command.name == \"run\":\n return io\n\n if args.has_option_token(\"--no-ansi\"):\n formatter = PlainFormatter(style_set)\n io.output.set_formatter(formatter)\n io.error_output.set_formatter(formatter)\n elif args.has_option_token(\"--ansi\"):\n formatter = AnsiFormatter(style_set, True)\n io.output.set_formatter(formatter)\n io.error_output.set_formatter(formatter)\n\n if args.has_option_token(\"-vvv\") or self.is_debug():\n io.set_verbosity(DEBUG)\n elif args.has_option_token(\"-vv\"):\n io.set_verbosity(VERY_VERBOSE)\n elif args.has_option_token(\"-v\"):\n io.set_verbosity(VERBOSE)\n\n if args.has_option_token(\"--quiet\") or args.has_option_token(\"-q\"):\n io.set_quiet(True)\n\n if args.has_option_token(\"--no-interaction\") or args.has_option_token(\"-n\"):\n io.set_interactive(False)\n\n return io\n", "path": "poetry/console/config/application_config.py"}]} | 2,705 | 308 |
gh_patches_debug_2706 | rasdani/github-patches | git_diff | fossasia__open-event-server-4302 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Custom-forms: Change data.type in custom-form
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server
**Current behavior:**
The type attribute is `custom_form` which leads to error 409 while making a request after #4300
**Expected behavior:**
The type attribute should be `custom-form`
@enigmaeth Can you please check?
</issue>
<code>
[start of app/api/custom_forms.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from marshmallow_jsonapi.flask import Schema, Relationship
3 from marshmallow_jsonapi import fields
4 import marshmallow.validate as validate
5 from app.api.helpers.permissions import jwt_required
6 from flask_rest_jsonapi.exceptions import ObjectNotFound
7
8 from app.api.bootstrap import api
9 from app.api.helpers.utilities import dasherize
10 from app.models import db
11 from app.models.custom_form import CustomForms
12 from app.models.event import Event
13 from app.api.helpers.db import safe_query
14 from app.api.helpers.utilities import require_relationship
15 from app.api.helpers.permission_manager import has_access
16 from app.api.helpers.query import event_query
17
18
19 class CustomFormSchema(Schema):
20 """
21 API Schema for Custom Forms database model
22 """
23 class Meta:
24 """
25 Meta class for CustomForm Schema
26 """
27 type_ = 'custom_form'
28 self_view = 'v1.custom_form_detail'
29 self_view_kwargs = {'id': '<id>'}
30 inflect = dasherize
31
32 id = fields.Integer(dump_only=True)
33 field_identifier = fields.Str(required=True)
34 form = fields.Str(required=True)
35 type = fields.Str(default="text", validate=validate.OneOf(
36 choices=["text", "checkbox", "select", "file", "image"]))
37 is_required = fields.Boolean(default=False)
38 is_included = fields.Boolean(default=False)
39 is_fixed = fields.Boolean(default=False)
40 event = Relationship(attribute='event',
41 self_view='v1.custom_form_event',
42 self_view_kwargs={'id': '<id>'},
43 related_view='v1.event_detail',
44 related_view_kwargs={'custom_form_id': '<id>'},
45 schema='EventSchema',
46 type_='event')
47
48
49 class CustomFormListPost(ResourceList):
50 """
51 Create and List Custom Forms
52 """
53
54 def before_post(self, args, kwargs, data):
55 """
56 method to check for required relationship with event
57 :param args:
58 :param kwargs:
59 :param data:
60 :return:
61 """
62 require_relationship(['event'], data)
63 if not has_access('is_coorganizer', event_id=data['event']):
64 raise ObjectNotFound({'parameter': 'event_id'},
65 "Event: {} not found".format(data['event_id']))
66
67 schema = CustomFormSchema
68 methods = ['POST', ]
69 data_layer = {'session': db.session,
70 'model': CustomForms
71 }
72
73
74 class CustomFormList(ResourceList):
75 """
76 Create and List Custom Forms
77 """
78 def query(self, view_kwargs):
79 """
80 query method for different view_kwargs
81 :param view_kwargs:
82 :return:
83 """
84 query_ = self.session.query(CustomForms)
85 query_ = event_query(self, query_, view_kwargs)
86 return query_
87
88 view_kwargs = True
89 decorators = (jwt_required, )
90 methods = ['GET', ]
91 schema = CustomFormSchema
92 data_layer = {'session': db.session,
93 'model': CustomForms,
94 'methods': {
95 'query': query
96 }}
97
98
99 class CustomFormDetail(ResourceDetail):
100 """
101 CustomForm Resource
102 """
103
104 def before_get_object(self, view_kwargs):
105 """
106 before get method
107 :param view_kwargs:
108 :return:
109 """
110 event = None
111 if view_kwargs.get('event_id'):
112 event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
113 elif view_kwargs.get('event_identifier'):
114 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
115
116 if event:
117 custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')
118 view_kwargs['id'] = custom_form.id
119
120 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
121 fetch_as="event_id", model=CustomForms, methods="PATCH,DELETE"), )
122 schema = CustomFormSchema
123 data_layer = {'session': db.session,
124 'model': CustomForms}
125
126
127 class CustomFormRelationshipRequired(ResourceRelationship):
128 """
129 CustomForm Relationship (Required)
130 """
131 decorators = (api.has_permission('is_coorganizer', fetch='event_id',
132 fetch_as="event_id", model=CustomForms, methods="PATCH"),)
133 methods = ['GET', 'PATCH']
134 schema = CustomFormSchema
135 data_layer = {'session': db.session,
136 'model': CustomForms}
137
[end of app/api/custom_forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/custom_forms.py b/app/api/custom_forms.py
--- a/app/api/custom_forms.py
+++ b/app/api/custom_forms.py
@@ -24,7 +24,7 @@
"""
Meta class for CustomForm Schema
"""
- type_ = 'custom_form'
+ type_ = 'custom-form'
self_view = 'v1.custom_form_detail'
self_view_kwargs = {'id': '<id>'}
inflect = dasherize
| {"golden_diff": "diff --git a/app/api/custom_forms.py b/app/api/custom_forms.py\n--- a/app/api/custom_forms.py\n+++ b/app/api/custom_forms.py\n@@ -24,7 +24,7 @@\n \"\"\"\n Meta class for CustomForm Schema\n \"\"\"\n- type_ = 'custom_form'\n+ type_ = 'custom-form'\n self_view = 'v1.custom_form_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n", "issue": "Custom-forms: Change data.type in custom-form\n**I'm submitting a ...** (check one with \"x\")\r\n- [x] bug report\r\n- [ ] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server\r\n\r\n**Current behavior:**\r\nThe type attribute is `custom_form` which leads to error 409 while making a request after #4300 \r\n\r\n**Expected behavior:**\r\nThe type attribute should be `custom-form` \r\n\r\n@enigmaeth Can you please check?\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nimport marshmallow.validate as validate\nfrom app.api.helpers.permissions import jwt_required\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.utilities import dasherize\nfrom app.models import db\nfrom app.models.custom_form import CustomForms\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.query import event_query\n\n\nclass CustomFormSchema(Schema):\n \"\"\"\n API Schema for Custom Forms database model\n \"\"\"\n class Meta:\n \"\"\"\n Meta class for CustomForm Schema\n \"\"\"\n type_ = 'custom_form'\n self_view = 'v1.custom_form_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Integer(dump_only=True)\n field_identifier = fields.Str(required=True)\n form = fields.Str(required=True)\n type = fields.Str(default=\"text\", validate=validate.OneOf(\n choices=[\"text\", \"checkbox\", \"select\", \"file\", \"image\"]))\n is_required = fields.Boolean(default=False)\n is_included = fields.Boolean(default=False)\n is_fixed = fields.Boolean(default=False)\n event = Relationship(attribute='event',\n self_view='v1.custom_form_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'custom_form_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass CustomFormListPost(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n \"\"\"\n method to check for required relationship with event\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound({'parameter': 'event_id'},\n \"Event: {} not found\".format(data['event_id']))\n\n schema = CustomFormSchema\n methods = ['POST', ]\n data_layer = {'session': db.session,\n 'model': CustomForms\n }\n\n\nclass CustomFormList(ResourceList):\n \"\"\"\n Create and List Custom Forms\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(CustomForms)\n query_ = event_query(self, query_, view_kwargs)\n return query_\n\n view_kwargs = True\n decorators = (jwt_required, )\n methods = ['GET', ]\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms,\n 'methods': {\n 'query': query\n }}\n\n\nclass CustomFormDetail(ResourceDetail):\n \"\"\"\n CustomForm Resource\n \"\"\"\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method\n :param view_kwargs:\n :return:\n \"\"\"\n event = None\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n\n if event:\n custom_form = safe_query(self, CustomForms, 'event_id', event.id, 'event_id')\n view_kwargs['id'] = custom_form.id\n\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH,DELETE\"), )\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n\n\nclass CustomFormRelationshipRequired(ResourceRelationship):\n \"\"\"\n CustomForm Relationship (Required)\n \"\"\"\n decorators = (api.has_permission('is_coorganizer', fetch='event_id',\n fetch_as=\"event_id\", model=CustomForms, methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n schema = CustomFormSchema\n data_layer = {'session': db.session,\n 'model': CustomForms}\n", "path": "app/api/custom_forms.py"}]} | 1,928 | 105 |
gh_patches_debug_30587 | rasdani/github-patches | git_diff | networkx__networkx-2618 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`networkx.version` shadows any other module named `version` if imported first
Steps to reproduce:
```
$ pip freeze | grep networkx
networkx==1.11
$ touch version.py
$ python -c 'import version; print(version)'
<module 'version' from '/Users/ben/scratch/version.py'>
$ python -c 'import networkx; import version; print(version)'
<module 'version' from '/Users/ben/.virtualenvs/personal/lib/python3.6/site-packages/networkx/version.py'>
```
Reading the code, it looks like the `release` module is adding the networkx package to `sys.path`, importing version and deleting it again?
</issue>
<code>
[start of networkx/release.py]
1 """Release data for NetworkX.
2
3 When NetworkX is imported a number of steps are followed to determine
4 the version information.
5
6 1) If the release is not a development release (dev=False), then version
7 information is read from version.py, a file containing statically
8 defined version information. This file should exist on every
9 downloadable release of NetworkX since setup.py creates it during
10 packaging/installation. However, version.py might not exist if one
11 is running NetworkX from the mercurial repository. In the event that
12 version.py does not exist, then no vcs information will be available.
13
14 2) If the release is a development release, then version information
15 is read dynamically, when possible. If no dynamic information can be
16 read, then an attempt is made to read the information from version.py.
17 If version.py does not exist, then no vcs information will be available.
18
19 Clarification:
20 version.py is created only by setup.py
21
22 When setup.py creates version.py, it does so before packaging/installation.
23 So the created file is included in the source distribution. When a user
24 downloads a tar.gz file and extracts the files, the files will not be in a
25 live version control repository. So when the user runs setup.py to install
26 NetworkX, we must make sure write_versionfile() does not overwrite the
27 revision information contained in the version.py that was included in the
28 tar.gz file. This is why write_versionfile() includes an early escape.
29
30 """
31
32 # Copyright (C) 2004-2017 by
33 # Aric Hagberg <[email protected]>
34 # Dan Schult <[email protected]>
35 # Pieter Swart <[email protected]>
36 # All rights reserved.
37 # BSD license.
38
39 from __future__ import absolute_import
40
41 import os
42 import sys
43 import time
44 import datetime
45
46 basedir = os.path.abspath(os.path.split(__file__)[0])
47
48
49 def write_versionfile():
50 """Creates a static file containing version information."""
51 versionfile = os.path.join(basedir, 'version.py')
52
53 text = '''"""
54 Version information for NetworkX, created during installation.
55
56 Do not add this file to the repository.
57
58 """
59
60 import datetime
61
62 version = %(version)r
63 date = %(date)r
64
65 # Was NetworkX built from a development version? If so, remember that the major
66 # and minor versions reference the "target" (rather than "current") release.
67 dev = %(dev)r
68
69 # Format: (name, major, min, revision)
70 version_info = %(version_info)r
71
72 # Format: a 'datetime.datetime' instance
73 date_info = %(date_info)r
74
75 # Format: (vcs, vcs_tuple)
76 vcs_info = %(vcs_info)r
77
78 '''
79
80 # Try to update all information
81 date, date_info, version, version_info, vcs_info = get_info(dynamic=True)
82
83 def writefile():
84 fh = open(versionfile, 'w')
85 subs = {
86 'dev': dev,
87 'version': version,
88 'version_info': version_info,
89 'date': date,
90 'date_info': date_info,
91 'vcs_info': vcs_info
92 }
93 fh.write(text % subs)
94 fh.close()
95
96 if vcs_info[0] == 'mercurial':
97 # Then, we want to update version.py.
98 writefile()
99 else:
100 if os.path.isfile(versionfile):
101 # This is *good*, and the most likely place users will be when
102 # running setup.py. We do not want to overwrite version.py.
103 # Grab the version so that setup can use it.
104 sys.path.insert(0, basedir)
105 from version import version
106 del sys.path[0]
107 else:
108 # This is *bad*. It means the user might have a tarball that
109 # does not include version.py. Let this error raise so we can
110 # fix the tarball.
111 ##raise Exception('version.py not found!')
112
113 # We no longer require that prepared tarballs include a version.py
114 # So we use the possibly trunctated value from get_info()
115 # Then we write a new file.
116 writefile()
117
118 return version
119
120
121 def get_revision():
122 """Returns revision and vcs information, dynamically obtained."""
123 vcs, revision, tag = None, None, None
124
125 gitdir = os.path.join(basedir, '..', '.git')
126
127 if os.path.isdir(gitdir):
128 vcs = 'git'
129 # For now, we are not bothering with revision and tag.
130
131 vcs_info = (vcs, (revision, tag))
132
133 return revision, vcs_info
134
135
136 def get_info(dynamic=True):
137 # Date information
138 date_info = datetime.datetime.now()
139 date = time.asctime(date_info.timetuple())
140
141 revision, version, version_info, vcs_info = None, None, None, None
142
143 import_failed = False
144 dynamic_failed = False
145
146 if dynamic:
147 revision, vcs_info = get_revision()
148 if revision is None:
149 dynamic_failed = True
150
151 if dynamic_failed or not dynamic:
152 # This is where most final releases of NetworkX will be.
153 # All info should come from version.py. If it does not exist, then
154 # no vcs information will be provided.
155 sys.path.insert(0, basedir)
156 try:
157 from version import date, date_info, version, version_info, vcs_info
158 except ImportError:
159 import_failed = True
160 vcs_info = (None, (None, None))
161 else:
162 revision = vcs_info[1][0]
163 del sys.path[0]
164
165 if import_failed or (dynamic and not dynamic_failed):
166 # We are here if:
167 # we failed to determine static versioning info, or
168 # we successfully obtained dynamic revision info
169 version = ''.join([str(major), '.', str(minor)])
170 if dev:
171 version += '.dev_' + date_info.strftime("%Y%m%d%H%M%S")
172 version_info = (name, major, minor, revision)
173
174 return date, date_info, version, version_info, vcs_info
175
176
177 # Version information
178 name = 'networkx'
179 major = "2"
180 minor = "0"
181
182
183 # Declare current release as a development release.
184 # Change to False before tagging a release; then change back.
185 dev = True
186
187
188 description = "Python package for creating and manipulating graphs and networks"
189
190 long_description = \
191 """
192 NetworkX is a Python package for the creation, manipulation, and
193 study of the structure, dynamics, and functions of complex networks.
194
195 """
196 license = 'BSD'
197 authors = {'Hagberg': ('Aric Hagberg', '[email protected]'),
198 'Schult': ('Dan Schult', '[email protected]'),
199 'Swart': ('Pieter Swart', '[email protected]')
200 }
201 maintainer = "NetworkX Developers"
202 maintainer_email = "[email protected]"
203 url = 'http://networkx.github.io/'
204 download_url = 'https://pypi.python.org/pypi/networkx/'
205 platforms = ['Linux', 'Mac OSX', 'Windows', 'Unix']
206 keywords = ['Networks', 'Graph Theory', 'Mathematics',
207 'network', 'graph', 'discrete mathematics', 'math']
208 classifiers = [
209 'Development Status :: 5 - Production/Stable',
210 'Intended Audience :: Developers',
211 'Intended Audience :: Science/Research',
212 'License :: OSI Approved :: BSD License',
213 'Operating System :: OS Independent',
214 'Programming Language :: Python :: 2',
215 'Programming Language :: Python :: 2.7',
216 'Programming Language :: Python :: 3',
217 'Programming Language :: Python :: 3.4',
218 'Programming Language :: Python :: 3.5',
219 'Programming Language :: Python :: 3.6',
220 'Topic :: Software Development :: Libraries :: Python Modules',
221 'Topic :: Scientific/Engineering :: Bio-Informatics',
222 'Topic :: Scientific/Engineering :: Information Analysis',
223 'Topic :: Scientific/Engineering :: Mathematics',
224 'Topic :: Scientific/Engineering :: Physics']
225
226 date, date_info, version, version_info, vcs_info = get_info()
227
228 if __name__ == '__main__':
229 # Write versionfile for nightly snapshots.
230 write_versionfile()
231
[end of networkx/release.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/networkx/release.py b/networkx/release.py
--- a/networkx/release.py
+++ b/networkx/release.py
@@ -101,9 +101,9 @@
# This is *good*, and the most likely place users will be when
# running setup.py. We do not want to overwrite version.py.
# Grab the version so that setup can use it.
- sys.path.insert(0, basedir)
+ #sys.path.insert(0, basedir)
from version import version
- del sys.path[0]
+ #del sys.path[0]
else:
# This is *bad*. It means the user might have a tarball that
# does not include version.py. Let this error raise so we can
@@ -152,7 +152,7 @@
# This is where most final releases of NetworkX will be.
# All info should come from version.py. If it does not exist, then
# no vcs information will be provided.
- sys.path.insert(0, basedir)
+ #sys.path.insert(0, basedir)
try:
from version import date, date_info, version, version_info, vcs_info
except ImportError:
@@ -160,7 +160,7 @@
vcs_info = (None, (None, None))
else:
revision = vcs_info[1][0]
- del sys.path[0]
+ #del sys.path[0]
if import_failed or (dynamic and not dynamic_failed):
# We are here if:
| {"golden_diff": "diff --git a/networkx/release.py b/networkx/release.py\n--- a/networkx/release.py\n+++ b/networkx/release.py\n@@ -101,9 +101,9 @@\n # This is *good*, and the most likely place users will be when\n # running setup.py. We do not want to overwrite version.py.\n # Grab the version so that setup can use it.\n- sys.path.insert(0, basedir)\n+ #sys.path.insert(0, basedir)\n from version import version\n- del sys.path[0]\n+ #del sys.path[0]\n else:\n # This is *bad*. It means the user might have a tarball that\n # does not include version.py. Let this error raise so we can\n@@ -152,7 +152,7 @@\n # This is where most final releases of NetworkX will be.\n # All info should come from version.py. If it does not exist, then\n # no vcs information will be provided.\n- sys.path.insert(0, basedir)\n+ #sys.path.insert(0, basedir)\n try:\n from version import date, date_info, version, version_info, vcs_info\n except ImportError:\n@@ -160,7 +160,7 @@\n vcs_info = (None, (None, None))\n else:\n revision = vcs_info[1][0]\n- del sys.path[0]\n+ #del sys.path[0]\n \n if import_failed or (dynamic and not dynamic_failed):\n # We are here if:\n", "issue": "`networkx.version` shadows any other module named `version` if imported first\nSteps to reproduce:\r\n\r\n```\r\n$ pip freeze | grep networkx\r\nnetworkx==1.11\r\n$ touch version.py\r\n$ python -c 'import version; print(version)'\r\n<module 'version' from '/Users/ben/scratch/version.py'>\r\n$ python -c 'import networkx; import version; print(version)'\r\n<module 'version' from '/Users/ben/.virtualenvs/personal/lib/python3.6/site-packages/networkx/version.py'>\r\n```\r\n\r\nReading the code, it looks like the `release` module is adding the networkx package to `sys.path`, importing version and deleting it again?\n", "before_files": [{"content": "\"\"\"Release data for NetworkX.\n\nWhen NetworkX is imported a number of steps are followed to determine\nthe version information.\n\n 1) If the release is not a development release (dev=False), then version\n information is read from version.py, a file containing statically\n defined version information. This file should exist on every\n downloadable release of NetworkX since setup.py creates it during\n packaging/installation. However, version.py might not exist if one\n is running NetworkX from the mercurial repository. In the event that\n version.py does not exist, then no vcs information will be available.\n\n 2) If the release is a development release, then version information\n is read dynamically, when possible. If no dynamic information can be\n read, then an attempt is made to read the information from version.py.\n If version.py does not exist, then no vcs information will be available.\n\nClarification:\n version.py is created only by setup.py\n\nWhen setup.py creates version.py, it does so before packaging/installation.\nSo the created file is included in the source distribution. When a user\ndownloads a tar.gz file and extracts the files, the files will not be in a\nlive version control repository. So when the user runs setup.py to install\nNetworkX, we must make sure write_versionfile() does not overwrite the\nrevision information contained in the version.py that was included in the\ntar.gz file. This is why write_versionfile() includes an early escape.\n\n\"\"\"\n\n# Copyright (C) 2004-2017 by\n# Aric Hagberg <[email protected]>\n# Dan Schult <[email protected]>\n# Pieter Swart <[email protected]>\n# All rights reserved.\n# BSD license.\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport time\nimport datetime\n\nbasedir = os.path.abspath(os.path.split(__file__)[0])\n\n\ndef write_versionfile():\n \"\"\"Creates a static file containing version information.\"\"\"\n versionfile = os.path.join(basedir, 'version.py')\n\n text = '''\"\"\"\nVersion information for NetworkX, created during installation.\n\nDo not add this file to the repository.\n\n\"\"\"\n\nimport datetime\n\nversion = %(version)r\ndate = %(date)r\n\n# Was NetworkX built from a development version? If so, remember that the major\n# and minor versions reference the \"target\" (rather than \"current\") release.\ndev = %(dev)r\n\n# Format: (name, major, min, revision)\nversion_info = %(version_info)r\n\n# Format: a 'datetime.datetime' instance\ndate_info = %(date_info)r\n\n# Format: (vcs, vcs_tuple)\nvcs_info = %(vcs_info)r\n\n'''\n\n # Try to update all information\n date, date_info, version, version_info, vcs_info = get_info(dynamic=True)\n\n def writefile():\n fh = open(versionfile, 'w')\n subs = {\n 'dev': dev,\n 'version': version,\n 'version_info': version_info,\n 'date': date,\n 'date_info': date_info,\n 'vcs_info': vcs_info\n }\n fh.write(text % subs)\n fh.close()\n\n if vcs_info[0] == 'mercurial':\n # Then, we want to update version.py.\n writefile()\n else:\n if os.path.isfile(versionfile):\n # This is *good*, and the most likely place users will be when\n # running setup.py. We do not want to overwrite version.py.\n # Grab the version so that setup can use it.\n sys.path.insert(0, basedir)\n from version import version\n del sys.path[0]\n else:\n # This is *bad*. It means the user might have a tarball that\n # does not include version.py. Let this error raise so we can\n # fix the tarball.\n ##raise Exception('version.py not found!')\n\n # We no longer require that prepared tarballs include a version.py\n # So we use the possibly trunctated value from get_info()\n # Then we write a new file.\n writefile()\n\n return version\n\n\ndef get_revision():\n \"\"\"Returns revision and vcs information, dynamically obtained.\"\"\"\n vcs, revision, tag = None, None, None\n\n gitdir = os.path.join(basedir, '..', '.git')\n\n if os.path.isdir(gitdir):\n vcs = 'git'\n # For now, we are not bothering with revision and tag.\n\n vcs_info = (vcs, (revision, tag))\n\n return revision, vcs_info\n\n\ndef get_info(dynamic=True):\n # Date information\n date_info = datetime.datetime.now()\n date = time.asctime(date_info.timetuple())\n\n revision, version, version_info, vcs_info = None, None, None, None\n\n import_failed = False\n dynamic_failed = False\n\n if dynamic:\n revision, vcs_info = get_revision()\n if revision is None:\n dynamic_failed = True\n\n if dynamic_failed or not dynamic:\n # This is where most final releases of NetworkX will be.\n # All info should come from version.py. If it does not exist, then\n # no vcs information will be provided.\n sys.path.insert(0, basedir)\n try:\n from version import date, date_info, version, version_info, vcs_info\n except ImportError:\n import_failed = True\n vcs_info = (None, (None, None))\n else:\n revision = vcs_info[1][0]\n del sys.path[0]\n\n if import_failed or (dynamic and not dynamic_failed):\n # We are here if:\n # we failed to determine static versioning info, or\n # we successfully obtained dynamic revision info\n version = ''.join([str(major), '.', str(minor)])\n if dev:\n version += '.dev_' + date_info.strftime(\"%Y%m%d%H%M%S\")\n version_info = (name, major, minor, revision)\n\n return date, date_info, version, version_info, vcs_info\n\n\n# Version information\nname = 'networkx'\nmajor = \"2\"\nminor = \"0\"\n\n\n# Declare current release as a development release.\n# Change to False before tagging a release; then change back.\ndev = True\n\n\ndescription = \"Python package for creating and manipulating graphs and networks\"\n\nlong_description = \\\n \"\"\"\nNetworkX is a Python package for the creation, manipulation, and\nstudy of the structure, dynamics, and functions of complex networks.\n\n\"\"\"\nlicense = 'BSD'\nauthors = {'Hagberg': ('Aric Hagberg', '[email protected]'),\n 'Schult': ('Dan Schult', '[email protected]'),\n 'Swart': ('Pieter Swart', '[email protected]')\n }\nmaintainer = \"NetworkX Developers\"\nmaintainer_email = \"[email protected]\"\nurl = 'http://networkx.github.io/'\ndownload_url = 'https://pypi.python.org/pypi/networkx/'\nplatforms = ['Linux', 'Mac OSX', 'Windows', 'Unix']\nkeywords = ['Networks', 'Graph Theory', 'Mathematics',\n 'network', 'graph', 'discrete mathematics', 'math']\nclassifiers = [\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Scientific/Engineering :: Bio-Informatics',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Scientific/Engineering :: Physics']\n\ndate, date_info, version, version_info, vcs_info = get_info()\n\nif __name__ == '__main__':\n # Write versionfile for nightly snapshots.\n write_versionfile()\n", "path": "networkx/release.py"}]} | 3,116 | 353 |
gh_patches_debug_37642 | rasdani/github-patches | git_diff | learningequality__kolibri-12059 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature Request: Add --manifest-only option to exportcontent
My understanding is that 0.16 will generate a channel manifest during
`kolibri manage exportcontent`
My request is that you add an option that will not do the export of content but only generate the manifest. This manifest could then be used on another remote install to import from network the same set of content.
```[tasklist]
### Tasks
- [ ] Add --manifest-only command line option to the exportcontent management command
- [ ] If this option is selected, generate the manifest, but skip copying any files (channel database files, and content files)
- [ ] Write tests to confirm the --manifest-only behaviour
```
</issue>
<code>
[start of kolibri/core/content/management/commands/exportcontent.py]
1 import logging
2 import os
3
4 from django.core.management.base import CommandError
5
6 from ...utils import paths
7 from kolibri.core.content.errors import InvalidStorageFilenameError
8 from kolibri.core.content.models import ChannelMetadata
9 from kolibri.core.content.utils.content_manifest import ContentManifest
10 from kolibri.core.content.utils.import_export_content import get_content_nodes_data
11 from kolibri.core.content.utils.import_export_content import get_import_export_nodes
12 from kolibri.core.content.utils.paths import get_content_file_name
13 from kolibri.core.tasks.management.commands.base import AsyncCommand
14 from kolibri.core.tasks.utils import get_current_job
15 from kolibri.utils import file_transfer as transfer
16
17 logger = logging.getLogger(__name__)
18
19
20 class Command(AsyncCommand):
21 exported_size = 0
22 total_resources = 0
23
24 def add_arguments(self, parser):
25 node_ids_help_text = """
26 Specify one or more node IDs to import. Only the files associated to those node IDs will be imported.
27 Make sure to call this near the end of the argument list.
28
29 e.g.
30
31 kolibri manage importcontent network <channel id> --node_ids <id1>,<id2>, [<ids>,...]
32 """
33 parser.add_argument(
34 "--node_ids",
35 "-n",
36 # Split the comma separated string we get, into a list of strings
37 type=lambda x: x.split(",") if x else [],
38 default=None,
39 required=False,
40 dest="node_ids",
41 help=node_ids_help_text,
42 )
43
44 exclude_node_ids_help_text = """
45 Specify one or more node IDs to exclude. Files associated to those node IDs will be not be imported.
46 Make sure to call this near the end of the argument list.
47
48 e.g.
49
50 kolibri manage importcontent network <channel id> --exclude_node_ids <id1>,<id2>, [<ids>,...]
51 """
52 parser.add_argument(
53 "--exclude_node_ids",
54 type=lambda x: x.split(",") if x else [],
55 default=None,
56 required=False,
57 dest="exclude_node_ids",
58 help=exclude_node_ids_help_text,
59 )
60
61 parser.add_argument("channel_id", type=str)
62 parser.add_argument("destination", type=str)
63
64 def update_job_metadata(self, total_bytes_to_transfer, total_resource_count):
65 job = get_current_job()
66 if job:
67 job.extra_metadata["file_size"] = total_bytes_to_transfer
68 job.extra_metadata["total_resources"] = total_resource_count
69 job.save_meta()
70
71 def handle_async(self, *args, **options):
72 if paths.using_remote_storage():
73 raise CommandError("Cannot export files when using remote file storage")
74 channel_id = options["channel_id"]
75 data_dir = os.path.realpath(options["destination"])
76 node_ids = options["node_ids"]
77 exclude_node_ids = options["exclude_node_ids"]
78 logger.info(
79 "Exporting content for channel id {} to {}".format(channel_id, data_dir)
80 )
81
82 channel_metadata = ChannelMetadata.objects.get(id=channel_id)
83
84 nodes_queries_list = get_import_export_nodes(
85 channel_id, node_ids, exclude_node_ids, available=True
86 )
87
88 (total_resource_count, files, total_bytes_to_transfer) = get_content_nodes_data(
89 channel_id, nodes_queries_list, available=True
90 )
91
92 self.update_job_metadata(total_bytes_to_transfer, total_resource_count)
93
94 exported_files = []
95
96 with self.start_progress(
97 total=total_bytes_to_transfer
98 ) as overall_progress_update:
99 for f in files:
100
101 if self.is_cancelled():
102 break
103
104 dest = self.export_file(f, data_dir, overall_progress_update)
105 if dest:
106 exported_files.append(dest)
107
108 # Reraise any cancellation
109 self.check_for_cancel()
110
111 logger.info(
112 "Exporting manifest for channel id {} to {}".format(channel_id, data_dir)
113 )
114
115 manifest_path = os.path.join(data_dir, "content", "manifest.json")
116 content_manifest = ContentManifest()
117 content_manifest.read(manifest_path)
118 content_manifest.add_content_nodes(
119 channel_id, channel_metadata.version, nodes_queries_list
120 )
121 content_manifest.write(manifest_path)
122
123 def export_file(self, f, data_dir, overall_progress_update):
124 filename = get_content_file_name(f)
125 try:
126 srcpath = paths.get_content_storage_file_path(filename)
127 dest = paths.get_content_storage_file_path(filename, datafolder=data_dir)
128 except InvalidStorageFilenameError:
129 # If any files have an invalid storage file name, don't export them.
130 overall_progress_update(f["file_size"])
131 return
132
133 # if the file already exists, add its size to our overall progress, and skip
134 if os.path.isfile(dest) and os.path.getsize(dest) == f["file_size"]:
135 overall_progress_update(f["file_size"])
136 return
137 copy = transfer.FileCopy(srcpath, dest, cancel_check=self.is_cancelled)
138 with copy, self.start_progress(
139 total=copy.transfer_size
140 ) as file_cp_progress_update:
141
142 def progress_update(length):
143 self.exported_size = self.exported_size + length
144 overall_progress_update(length)
145 file_cp_progress_update(length)
146
147 try:
148 copy.run(progress_update=progress_update)
149 except transfer.TransferCanceled:
150 job = get_current_job()
151 if job:
152 job.extra_metadata["file_size"] = self.exported_size
153 job.extra_metadata["total_resources"] = 0
154 job.save_meta()
155 return
156 return dest
157
[end of kolibri/core/content/management/commands/exportcontent.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kolibri/core/content/management/commands/exportcontent.py b/kolibri/core/content/management/commands/exportcontent.py
--- a/kolibri/core/content/management/commands/exportcontent.py
+++ b/kolibri/core/content/management/commands/exportcontent.py
@@ -60,6 +60,12 @@
parser.add_argument("channel_id", type=str)
parser.add_argument("destination", type=str)
+ parser.add_argument(
+ "--manifest-only",
+ action="store_true",
+ default=False,
+ help="Generate only the manifest.json file",
+ )
def update_job_metadata(self, total_bytes_to_transfer, total_resource_count):
job = get_current_job()
@@ -75,9 +81,6 @@
data_dir = os.path.realpath(options["destination"])
node_ids = options["node_ids"]
exclude_node_ids = options["exclude_node_ids"]
- logger.info(
- "Exporting content for channel id {} to {}".format(channel_id, data_dir)
- )
channel_metadata = ChannelMetadata.objects.get(id=channel_id)
@@ -91,19 +94,11 @@
self.update_job_metadata(total_bytes_to_transfer, total_resource_count)
- exported_files = []
-
- with self.start_progress(
- total=total_bytes_to_transfer
- ) as overall_progress_update:
- for f in files:
-
- if self.is_cancelled():
- break
-
- dest = self.export_file(f, data_dir, overall_progress_update)
- if dest:
- exported_files.append(dest)
+ # dont copy files if we are only exporting the manifest
+ if not options["manifest_only"]:
+ self.copy_content_files(
+ channel_id, data_dir, files, total_bytes_to_transfer
+ )
# Reraise any cancellation
self.check_for_cancel()
@@ -120,6 +115,18 @@
)
content_manifest.write(manifest_path)
+ def copy_content_files(self, channel_id, data_dir, files, total_bytes_to_transfer):
+ logger.info(
+ "Exporting content for channel id {} to {}".format(channel_id, data_dir)
+ )
+ with self.start_progress(
+ total=total_bytes_to_transfer
+ ) as overall_progress_update:
+ for f in files:
+ if self.is_cancelled():
+ break
+ self.export_file(f, data_dir, overall_progress_update)
+
def export_file(self, f, data_dir, overall_progress_update):
filename = get_content_file_name(f)
try:
| {"golden_diff": "diff --git a/kolibri/core/content/management/commands/exportcontent.py b/kolibri/core/content/management/commands/exportcontent.py\n--- a/kolibri/core/content/management/commands/exportcontent.py\n+++ b/kolibri/core/content/management/commands/exportcontent.py\n@@ -60,6 +60,12 @@\n \n parser.add_argument(\"channel_id\", type=str)\n parser.add_argument(\"destination\", type=str)\n+ parser.add_argument(\n+ \"--manifest-only\",\n+ action=\"store_true\",\n+ default=False,\n+ help=\"Generate only the manifest.json file\",\n+ )\n \n def update_job_metadata(self, total_bytes_to_transfer, total_resource_count):\n job = get_current_job()\n@@ -75,9 +81,6 @@\n data_dir = os.path.realpath(options[\"destination\"])\n node_ids = options[\"node_ids\"]\n exclude_node_ids = options[\"exclude_node_ids\"]\n- logger.info(\n- \"Exporting content for channel id {} to {}\".format(channel_id, data_dir)\n- )\n \n channel_metadata = ChannelMetadata.objects.get(id=channel_id)\n \n@@ -91,19 +94,11 @@\n \n self.update_job_metadata(total_bytes_to_transfer, total_resource_count)\n \n- exported_files = []\n-\n- with self.start_progress(\n- total=total_bytes_to_transfer\n- ) as overall_progress_update:\n- for f in files:\n-\n- if self.is_cancelled():\n- break\n-\n- dest = self.export_file(f, data_dir, overall_progress_update)\n- if dest:\n- exported_files.append(dest)\n+ # dont copy files if we are only exporting the manifest\n+ if not options[\"manifest_only\"]:\n+ self.copy_content_files(\n+ channel_id, data_dir, files, total_bytes_to_transfer\n+ )\n \n # Reraise any cancellation\n self.check_for_cancel()\n@@ -120,6 +115,18 @@\n )\n content_manifest.write(manifest_path)\n \n+ def copy_content_files(self, channel_id, data_dir, files, total_bytes_to_transfer):\n+ logger.info(\n+ \"Exporting content for channel id {} to {}\".format(channel_id, data_dir)\n+ )\n+ with self.start_progress(\n+ total=total_bytes_to_transfer\n+ ) as overall_progress_update:\n+ for f in files:\n+ if self.is_cancelled():\n+ break\n+ self.export_file(f, data_dir, overall_progress_update)\n+\n def export_file(self, f, data_dir, overall_progress_update):\n filename = get_content_file_name(f)\n try:\n", "issue": "Feature Request: Add --manifest-only option to exportcontent\nMy understanding is that 0.16 will generate a channel manifest during \r\n\r\n`kolibri manage exportcontent`\r\n\r\nMy request is that you add an option that will not do the export of content but only generate the manifest. This manifest could then be used on another remote install to import from network the same set of content.\r\n\r\n\r\n```[tasklist]\r\n### Tasks\r\n- [ ] Add --manifest-only command line option to the exportcontent management command\r\n- [ ] If this option is selected, generate the manifest, but skip copying any files (channel database files, and content files)\r\n- [ ] Write tests to confirm the --manifest-only behaviour\r\n```\r\n\n", "before_files": [{"content": "import logging\nimport os\n\nfrom django.core.management.base import CommandError\n\nfrom ...utils import paths\nfrom kolibri.core.content.errors import InvalidStorageFilenameError\nfrom kolibri.core.content.models import ChannelMetadata\nfrom kolibri.core.content.utils.content_manifest import ContentManifest\nfrom kolibri.core.content.utils.import_export_content import get_content_nodes_data\nfrom kolibri.core.content.utils.import_export_content import get_import_export_nodes\nfrom kolibri.core.content.utils.paths import get_content_file_name\nfrom kolibri.core.tasks.management.commands.base import AsyncCommand\nfrom kolibri.core.tasks.utils import get_current_job\nfrom kolibri.utils import file_transfer as transfer\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(AsyncCommand):\n exported_size = 0\n total_resources = 0\n\n def add_arguments(self, parser):\n node_ids_help_text = \"\"\"\n Specify one or more node IDs to import. Only the files associated to those node IDs will be imported.\n Make sure to call this near the end of the argument list.\n\n e.g.\n\n kolibri manage importcontent network <channel id> --node_ids <id1>,<id2>, [<ids>,...]\n \"\"\"\n parser.add_argument(\n \"--node_ids\",\n \"-n\",\n # Split the comma separated string we get, into a list of strings\n type=lambda x: x.split(\",\") if x else [],\n default=None,\n required=False,\n dest=\"node_ids\",\n help=node_ids_help_text,\n )\n\n exclude_node_ids_help_text = \"\"\"\n Specify one or more node IDs to exclude. Files associated to those node IDs will be not be imported.\n Make sure to call this near the end of the argument list.\n\n e.g.\n\n kolibri manage importcontent network <channel id> --exclude_node_ids <id1>,<id2>, [<ids>,...]\n \"\"\"\n parser.add_argument(\n \"--exclude_node_ids\",\n type=lambda x: x.split(\",\") if x else [],\n default=None,\n required=False,\n dest=\"exclude_node_ids\",\n help=exclude_node_ids_help_text,\n )\n\n parser.add_argument(\"channel_id\", type=str)\n parser.add_argument(\"destination\", type=str)\n\n def update_job_metadata(self, total_bytes_to_transfer, total_resource_count):\n job = get_current_job()\n if job:\n job.extra_metadata[\"file_size\"] = total_bytes_to_transfer\n job.extra_metadata[\"total_resources\"] = total_resource_count\n job.save_meta()\n\n def handle_async(self, *args, **options):\n if paths.using_remote_storage():\n raise CommandError(\"Cannot export files when using remote file storage\")\n channel_id = options[\"channel_id\"]\n data_dir = os.path.realpath(options[\"destination\"])\n node_ids = options[\"node_ids\"]\n exclude_node_ids = options[\"exclude_node_ids\"]\n logger.info(\n \"Exporting content for channel id {} to {}\".format(channel_id, data_dir)\n )\n\n channel_metadata = ChannelMetadata.objects.get(id=channel_id)\n\n nodes_queries_list = get_import_export_nodes(\n channel_id, node_ids, exclude_node_ids, available=True\n )\n\n (total_resource_count, files, total_bytes_to_transfer) = get_content_nodes_data(\n channel_id, nodes_queries_list, available=True\n )\n\n self.update_job_metadata(total_bytes_to_transfer, total_resource_count)\n\n exported_files = []\n\n with self.start_progress(\n total=total_bytes_to_transfer\n ) as overall_progress_update:\n for f in files:\n\n if self.is_cancelled():\n break\n\n dest = self.export_file(f, data_dir, overall_progress_update)\n if dest:\n exported_files.append(dest)\n\n # Reraise any cancellation\n self.check_for_cancel()\n\n logger.info(\n \"Exporting manifest for channel id {} to {}\".format(channel_id, data_dir)\n )\n\n manifest_path = os.path.join(data_dir, \"content\", \"manifest.json\")\n content_manifest = ContentManifest()\n content_manifest.read(manifest_path)\n content_manifest.add_content_nodes(\n channel_id, channel_metadata.version, nodes_queries_list\n )\n content_manifest.write(manifest_path)\n\n def export_file(self, f, data_dir, overall_progress_update):\n filename = get_content_file_name(f)\n try:\n srcpath = paths.get_content_storage_file_path(filename)\n dest = paths.get_content_storage_file_path(filename, datafolder=data_dir)\n except InvalidStorageFilenameError:\n # If any files have an invalid storage file name, don't export them.\n overall_progress_update(f[\"file_size\"])\n return\n\n # if the file already exists, add its size to our overall progress, and skip\n if os.path.isfile(dest) and os.path.getsize(dest) == f[\"file_size\"]:\n overall_progress_update(f[\"file_size\"])\n return\n copy = transfer.FileCopy(srcpath, dest, cancel_check=self.is_cancelled)\n with copy, self.start_progress(\n total=copy.transfer_size\n ) as file_cp_progress_update:\n\n def progress_update(length):\n self.exported_size = self.exported_size + length\n overall_progress_update(length)\n file_cp_progress_update(length)\n\n try:\n copy.run(progress_update=progress_update)\n except transfer.TransferCanceled:\n job = get_current_job()\n if job:\n job.extra_metadata[\"file_size\"] = self.exported_size\n job.extra_metadata[\"total_resources\"] = 0\n job.save_meta()\n return\n return dest\n", "path": "kolibri/core/content/management/commands/exportcontent.py"}]} | 2,233 | 574 |
gh_patches_debug_38180 | rasdani/github-patches | git_diff | streamlink__streamlink-5147 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.nimotv: live stream stops after couple of seconds
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
try any live streams of nimo.tv, the live stream will stop in couple of seconds.
### Debug log
```text
c:\temp>streamlink --loglevel debug https://www.nimo.tv/live/97747608 best
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.7.8
[cli][debug] Streamlink: 5.1.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2018.11.29
[cli][debug] isodate: 0.6.0
[cli][debug] lxml: 4.6.4
[cli][debug] pycountry: 20.7.3
[cli][debug] pycryptodome: 3.7.3
[cli][debug] PySocks: 1.6.8
[cli][debug] requests: 2.26.0
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.2.1
[cli][debug] importlib-metadata: 3.10.0
[cli][debug] Arguments:
[cli][debug] url=https://www.nimo.tv/live/97747608
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin nimotv for URL https://www.nimo.tv/live/97747608
[plugins.nimotv][debug] URL=http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8
[cli][info] Available streams: 240p (worst), 360p, 480p, 720p, 1080p (best)
[cli][info] Opening stream: 1080p (hls)
[cli][info] Starting player: "C:\Program Files\VideoLAN\VLC\vlc.exe"
[stream.hls][debug] Reloading playlist
[cli][debug] Pre-buffering 8192 bytes
[stream.hls][debug] First Sequence: 1674443953; Last Sequence: 1674443955
[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 1674443953; End Sequence: None
[stream.hls][debug] Adding segment 1674443953 to queue
[stream.hls][debug] Adding segment 1674443954 to queue
[stream.hls][debug] Adding segment 1674443955 to queue
[stream.hls][debug] Writing segment 1674443953 to output
[stream.hls][debug] Segment 1674443953 complete
[cli.output][debug] Opening subprocess: "C:\Program Files\VideoLAN\VLC\vlc.exe" --input-title-format https://www.nimo.tv/live/97747608 -
[cli][debug] Writing stream to output
[stream.hls][debug] Writing segment 1674443954 to output
[stream.hls][debug] Segment 1674443954 complete
[stream.hls][debug] Writing segment 1674443955 to output
[stream.hls][debug] Segment 1674443955 complete
[stream.hls][debug] Reloading playlist
[stream.hls][warning] Failed to reload playlist: Unable to open URL: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000 (403 Client Error: Forbidden for url: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000&appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000)
[stream.hls][debug] Reloading playlist
[stream.hls][warning] Failed to reload playlist: Unable to open URL: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000 (403 Client Error: Forbidden for url: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000&appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000)
[stream.hls][debug] Reloading playlist
[stream.hls][warning] Failed to reload playlist: Unable to open URL: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000 (403 Client Error: Forbidden for url: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000&appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000)
```
</issue>
<code>
[start of src/streamlink/plugins/nimotv.py]
1 """
2 $description Chinese, global live-streaming platform run by Huya Live.
3 $url nimo.tv
4 $type live
5 """
6
7 import logging
8 import re
9
10 from streamlink.plugin import Plugin, pluginmatcher
11 from streamlink.plugin.api import useragents, validate
12 from streamlink.stream.hls import HLSStream
13
14 log = logging.getLogger(__name__)
15
16
17 @pluginmatcher(re.compile(
18 r'https?://(?:www\.|m\.)?nimo\.tv/(?P<username>.*)'
19 ))
20 class NimoTV(Plugin):
21 data_url = 'https://m.nimo.tv/{0}'
22
23 video_qualities = {
24 250: '240p',
25 500: '360p',
26 1000: '480p',
27 2500: '720p',
28 6000: '1080p',
29 }
30
31 _re_appid = re.compile(br'appid=(\d+)')
32 _re_domain = re.compile(br'(https?:\/\/[A-Za-z]{2,3}.hls[A-Za-z\.\/]+)(?:V|&)')
33 _re_id = re.compile(br'id=([^|\\]+)')
34 _re_tp = re.compile(br'tp=(\d+)')
35
36 def _get_streams(self):
37 username = self.match.group('username')
38 if not username:
39 return
40
41 data = self.session.http.get(
42 self.data_url.format(username),
43 headers={
44 "User-Agent": useragents.ANDROID,
45 },
46 schema=validate.Schema(
47 re.compile(r"<script>var G_roomBaseInfo = ({.*?});</script>"),
48 validate.none_or_all(
49 validate.get(1),
50 validate.parse_json(),
51 {
52 "title": str,
53 "nickname": str,
54 "game": str,
55 "liveStreamStatus": int,
56 validate.optional("mStreamPkg"): str,
57 },
58 ),
59 ),
60 )
61
62 if data['liveStreamStatus'] == 0:
63 log.info('This stream is currently offline')
64 return
65
66 mStreamPkg = data.get('mStreamPkg')
67 if not mStreamPkg:
68 log.debug('missing mStreamPkg')
69 return
70
71 mStreamPkg = bytes.fromhex(mStreamPkg)
72 try:
73 _appid = self._re_appid.search(mStreamPkg).group(1).decode('utf-8')
74 _domain = self._re_domain.search(mStreamPkg).group(1).decode('utf-8')
75 _id = self._re_id.search(mStreamPkg).group(1).decode('utf-8')
76 _tp = self._re_tp.search(mStreamPkg).group(1).decode('utf-8')
77 except AttributeError:
78 log.error('invalid mStreamPkg')
79 return
80
81 params = {
82 'appid': _appid,
83 'id': _id,
84 'tp': _tp,
85 'u': '0',
86 't': '100',
87 'needwm': 1,
88 }
89 url = f'{_domain}{_id}.m3u8'
90 log.debug(f'URL={url}')
91 for k, v in self.video_qualities.items():
92 _params = params.copy()
93 _params["ratio"] = k
94 if v == '1080p':
95 _params["needwm"] = 0
96 elif v in ('720p', '480p', '360p'):
97 _params["sphd"] = 1
98
99 log.trace(f'{v} params={_params!r}')
100 # some qualities might not exist, but it will select a different lower quality
101 yield v, HLSStream(self.session, url, params=_params)
102
103 self.author = data['nickname']
104 self.category = data['game']
105 self.title = data['title']
106
107
108 __plugin__ = NimoTV
109
[end of src/streamlink/plugins/nimotv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/nimotv.py b/src/streamlink/plugins/nimotv.py
--- a/src/streamlink/plugins/nimotv.py
+++ b/src/streamlink/plugins/nimotv.py
@@ -9,7 +9,7 @@
from streamlink.plugin import Plugin, pluginmatcher
from streamlink.plugin.api import useragents, validate
-from streamlink.stream.hls import HLSStream
+from streamlink.stream.http import HTTPStream
log = logging.getLogger(__name__)
@@ -32,6 +32,8 @@
_re_domain = re.compile(br'(https?:\/\/[A-Za-z]{2,3}.hls[A-Za-z\.\/]+)(?:V|&)')
_re_id = re.compile(br'id=([^|\\]+)')
_re_tp = re.compile(br'tp=(\d+)')
+ _re_wsSecret = re.compile(br'wsSecret=(\w+)')
+ _re_wsTime = re.compile(br'wsTime=(\w+)')
def _get_streams(self):
username = self.match.group('username')
@@ -74,6 +76,8 @@
_domain = self._re_domain.search(mStreamPkg).group(1).decode('utf-8')
_id = self._re_id.search(mStreamPkg).group(1).decode('utf-8')
_tp = self._re_tp.search(mStreamPkg).group(1).decode('utf-8')
+ _wsSecret = self._re_wsSecret.search(mStreamPkg).group(1).decode('utf-8')
+ _wsTime = self._re_wsTime.search(mStreamPkg).group(1).decode('utf-8')
except AttributeError:
log.error('invalid mStreamPkg')
return
@@ -82,11 +86,14 @@
'appid': _appid,
'id': _id,
'tp': _tp,
+ 'wsSecret': _wsSecret,
+ 'wsTime': _wsTime,
'u': '0',
't': '100',
'needwm': 1,
}
- url = f'{_domain}{_id}.m3u8'
+ url = f'{_domain}{_id}.flv'
+ url = url.replace('hls.nimo.tv', 'flv.nimo.tv')
log.debug(f'URL={url}')
for k, v in self.video_qualities.items():
_params = params.copy()
@@ -98,7 +105,7 @@
log.trace(f'{v} params={_params!r}')
# some qualities might not exist, but it will select a different lower quality
- yield v, HLSStream(self.session, url, params=_params)
+ yield v, HTTPStream(self.session, url, params=_params)
self.author = data['nickname']
self.category = data['game']
| {"golden_diff": "diff --git a/src/streamlink/plugins/nimotv.py b/src/streamlink/plugins/nimotv.py\n--- a/src/streamlink/plugins/nimotv.py\n+++ b/src/streamlink/plugins/nimotv.py\n@@ -9,7 +9,7 @@\n \n from streamlink.plugin import Plugin, pluginmatcher\n from streamlink.plugin.api import useragents, validate\n-from streamlink.stream.hls import HLSStream\n+from streamlink.stream.http import HTTPStream\n \n log = logging.getLogger(__name__)\n \n@@ -32,6 +32,8 @@\n _re_domain = re.compile(br'(https?:\\/\\/[A-Za-z]{2,3}.hls[A-Za-z\\.\\/]+)(?:V|&)')\n _re_id = re.compile(br'id=([^|\\\\]+)')\n _re_tp = re.compile(br'tp=(\\d+)')\n+ _re_wsSecret = re.compile(br'wsSecret=(\\w+)')\n+ _re_wsTime = re.compile(br'wsTime=(\\w+)')\n \n def _get_streams(self):\n username = self.match.group('username')\n@@ -74,6 +76,8 @@\n _domain = self._re_domain.search(mStreamPkg).group(1).decode('utf-8')\n _id = self._re_id.search(mStreamPkg).group(1).decode('utf-8')\n _tp = self._re_tp.search(mStreamPkg).group(1).decode('utf-8')\n+ _wsSecret = self._re_wsSecret.search(mStreamPkg).group(1).decode('utf-8')\n+ _wsTime = self._re_wsTime.search(mStreamPkg).group(1).decode('utf-8')\n except AttributeError:\n log.error('invalid mStreamPkg')\n return\n@@ -82,11 +86,14 @@\n 'appid': _appid,\n 'id': _id,\n 'tp': _tp,\n+ 'wsSecret': _wsSecret,\n+ 'wsTime': _wsTime,\n 'u': '0',\n 't': '100',\n 'needwm': 1,\n }\n- url = f'{_domain}{_id}.m3u8'\n+ url = f'{_domain}{_id}.flv'\n+ url = url.replace('hls.nimo.tv', 'flv.nimo.tv')\n log.debug(f'URL={url}')\n for k, v in self.video_qualities.items():\n _params = params.copy()\n@@ -98,7 +105,7 @@\n \n log.trace(f'{v} params={_params!r}')\n # some qualities might not exist, but it will select a different lower quality\n- yield v, HLSStream(self.session, url, params=_params)\n+ yield v, HTTPStream(self.session, url, params=_params)\n \n self.author = data['nickname']\n self.category = data['game']\n", "issue": "plugins.nimotv: live stream stops after couple of seconds\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\ntry any live streams of nimo.tv, the live stream will stop in couple of seconds.\n\n### Debug log\n\n```text\nc:\\temp>streamlink --loglevel debug https://www.nimo.tv/live/97747608 best\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.7.8\r\n[cli][debug] Streamlink: 5.1.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2018.11.29\r\n[cli][debug] isodate: 0.6.0\r\n[cli][debug] lxml: 4.6.4\r\n[cli][debug] pycountry: 20.7.3\r\n[cli][debug] pycryptodome: 3.7.3\r\n[cli][debug] PySocks: 1.6.8\r\n[cli][debug] requests: 2.26.0\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.2.1\r\n[cli][debug] importlib-metadata: 3.10.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.nimo.tv/live/97747608\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin nimotv for URL https://www.nimo.tv/live/97747608\r\n[plugins.nimotv][debug] URL=http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8\r\n[cli][info] Available streams: 240p (worst), 360p, 480p, 720p, 1080p (best)\r\n[cli][info] Opening stream: 1080p (hls)\r\n[cli][info] Starting player: \"C:\\Program Files\\VideoLAN\\VLC\\vlc.exe\"\r\n[stream.hls][debug] Reloading playlist\r\n[cli][debug] Pre-buffering 8192 bytes\r\n[stream.hls][debug] First Sequence: 1674443953; Last Sequence: 1674443955\r\n[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 1674443953; End Sequence: None\r\n[stream.hls][debug] Adding segment 1674443953 to queue\r\n[stream.hls][debug] Adding segment 1674443954 to queue\r\n[stream.hls][debug] Adding segment 1674443955 to queue\r\n[stream.hls][debug] Writing segment 1674443953 to output\r\n[stream.hls][debug] Segment 1674443953 complete\r\n[cli.output][debug] Opening subprocess: \"C:\\Program Files\\VideoLAN\\VLC\\vlc.exe\" --input-title-format https://www.nimo.tv/live/97747608 -\r\n[cli][debug] Writing stream to output\r\n[stream.hls][debug] Writing segment 1674443954 to output\r\n[stream.hls][debug] Segment 1674443954 complete\r\n[stream.hls][debug] Writing segment 1674443955 to output\r\n[stream.hls][debug] Segment 1674443955 complete\r\n[stream.hls][debug] Reloading playlist\r\n[stream.hls][warning] Failed to reload playlist: Unable to open URL: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000 (403 Client Error: Forbidden for url: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000&appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000)\r\n[stream.hls][debug] Reloading playlist\r\n[stream.hls][warning] Failed to reload playlist: Unable to open URL: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000 (403 Client Error: Forbidden for url: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000&appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000)\r\n[stream.hls][debug] Reloading playlist\r\n[stream.hls][warning] Failed to reload playlist: Unable to open URL: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000 (403 Client Error: Forbidden for url: http://tx.hls.nimo.tv/live/su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7.m3u8?appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000&appid=81&id=su1629530778249rea7ea30592d8ab7ce78a1a13e3037be7&tp=1674443951824&u=0&t=100&needwm=0&ratio=6000)\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Chinese, global live-streaming platform run by Huya Live.\n$url nimo.tv\n$type live\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import useragents, validate\nfrom streamlink.stream.hls import HLSStream\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r'https?://(?:www\\.|m\\.)?nimo\\.tv/(?P<username>.*)'\n))\nclass NimoTV(Plugin):\n data_url = 'https://m.nimo.tv/{0}'\n\n video_qualities = {\n 250: '240p',\n 500: '360p',\n 1000: '480p',\n 2500: '720p',\n 6000: '1080p',\n }\n\n _re_appid = re.compile(br'appid=(\\d+)')\n _re_domain = re.compile(br'(https?:\\/\\/[A-Za-z]{2,3}.hls[A-Za-z\\.\\/]+)(?:V|&)')\n _re_id = re.compile(br'id=([^|\\\\]+)')\n _re_tp = re.compile(br'tp=(\\d+)')\n\n def _get_streams(self):\n username = self.match.group('username')\n if not username:\n return\n\n data = self.session.http.get(\n self.data_url.format(username),\n headers={\n \"User-Agent\": useragents.ANDROID,\n },\n schema=validate.Schema(\n re.compile(r\"<script>var G_roomBaseInfo = ({.*?});</script>\"),\n validate.none_or_all(\n validate.get(1),\n validate.parse_json(),\n {\n \"title\": str,\n \"nickname\": str,\n \"game\": str,\n \"liveStreamStatus\": int,\n validate.optional(\"mStreamPkg\"): str,\n },\n ),\n ),\n )\n\n if data['liveStreamStatus'] == 0:\n log.info('This stream is currently offline')\n return\n\n mStreamPkg = data.get('mStreamPkg')\n if not mStreamPkg:\n log.debug('missing mStreamPkg')\n return\n\n mStreamPkg = bytes.fromhex(mStreamPkg)\n try:\n _appid = self._re_appid.search(mStreamPkg).group(1).decode('utf-8')\n _domain = self._re_domain.search(mStreamPkg).group(1).decode('utf-8')\n _id = self._re_id.search(mStreamPkg).group(1).decode('utf-8')\n _tp = self._re_tp.search(mStreamPkg).group(1).decode('utf-8')\n except AttributeError:\n log.error('invalid mStreamPkg')\n return\n\n params = {\n 'appid': _appid,\n 'id': _id,\n 'tp': _tp,\n 'u': '0',\n 't': '100',\n 'needwm': 1,\n }\n url = f'{_domain}{_id}.m3u8'\n log.debug(f'URL={url}')\n for k, v in self.video_qualities.items():\n _params = params.copy()\n _params[\"ratio\"] = k\n if v == '1080p':\n _params[\"needwm\"] = 0\n elif v in ('720p', '480p', '360p'):\n _params[\"sphd\"] = 1\n\n log.trace(f'{v} params={_params!r}')\n # some qualities might not exist, but it will select a different lower quality\n yield v, HLSStream(self.session, url, params=_params)\n\n self.author = data['nickname']\n self.category = data['game']\n self.title = data['title']\n\n\n__plugin__ = NimoTV\n", "path": "src/streamlink/plugins/nimotv.py"}]} | 3,876 | 648 |
gh_patches_debug_227 | rasdani/github-patches | git_diff | sktime__sktime-3618 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] ShapeletTransformClassifier numba error when dtype is not float64
**Describe the bug**
Seems that when using `ShapeletTransformClassifier` there is some Numba accelerated functions that break if the data in the input data frame are of type `int32`.
**To Reproduce**
MRE as below:
```python
import warnings
warnings.simplefilter('ignore', category=FutureWarning)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sktime.classification.shapelet_based import ShapeletTransformClassifier
from sktime.contrib.vector_classifiers._rotation_forest import RotationForest
# make fake data
data = pd.DataFrame(np.random.random((5000, 250))).astype(np.float32)
# reshape to input into Shapelet Classifier
data4train = data.apply(lambda row: pd.Series({
'time-series': pd.Series(row.values)
}), axis=1)
# make targets
targets = pd.Series(2500 * [1] + 2500 * [0])
# train test split
X_train, X_test, y_train, y_test = train_test_split(
data4train, targets, test_size=0.7, random_state=42
)
# train
clf = ShapeletTransformClassifier(
estimator=RotationForest(n_estimators=3),
n_shapelet_samples=500,
max_shapelets=20,
batch_size=100,
)
clf.fit(X_train, y_train)
```
**Expected behavior**
will not throw an error, and also enforce conversion to float32 or float64 within the classifier?
**Additional context**
removing conversion to `float32` (hence `dtype == float64`) will make the code running without issues.
**Versions**
numba 0.55.1
sklearn 0.24.1
sktime 0.11.0
pandas 1.4.2
python 3.8.10
**Stacktrace output**
```bash
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Cannot unify array(float64, 1d, C) and array(float32, 1d, C) for 'X_n.2', defined at /path_to_mypython/python/lib/python3.8/site-packages/sktime/utils/numba/general.py (39)
File "../python/lib/python3.8/site-packages/sktime/utils/numba/general.py", line 39:
def z_normalise_series(X):
<source elided>
return X_n
```
</issue>
<code>
[start of sktime/utils/numba/general.py]
1 # -*- coding: utf-8 -*-
2 """General numba utilities."""
3
4 import numpy as np
5 from numba import njit
6
7
8 @njit(fastmath=True, cache=True)
9 def unique_count(X):
10 """Numba unique count function for a 1D array."""
11 if len(X) > 0:
12 X = np.sort(X)
13 unique = np.zeros(len(X))
14 unique[0] = X[0]
15 counts = np.zeros(len(X), dtype=np.int_)
16 counts[0] = 1
17 unique_count = 0
18
19 for i in X[1:]:
20 if i != unique[unique_count]:
21 unique_count += 1
22 unique[unique_count] = i
23 counts[unique_count] = 1
24 else:
25 counts[unique_count] += 1
26 return unique[: unique_count + 1], counts[: unique_count + 1]
27 return None, np.zeros(0, dtype=np.int_)
28
29
30 @njit(fastmath=True, cache=True)
31 def z_normalise_series(X):
32 """Numba z-normalisation function for a single time series."""
33 std = np.std(X)
34 if std > 0:
35 X_n = (X - np.mean(X)) / std
36 else:
37 X_n = np.zeros(len(X))
38
39 return X_n
40
[end of sktime/utils/numba/general.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sktime/utils/numba/general.py b/sktime/utils/numba/general.py
--- a/sktime/utils/numba/general.py
+++ b/sktime/utils/numba/general.py
@@ -34,6 +34,5 @@
if std > 0:
X_n = (X - np.mean(X)) / std
else:
- X_n = np.zeros(len(X))
-
+ X_n = X - np.mean(X)
return X_n
| {"golden_diff": "diff --git a/sktime/utils/numba/general.py b/sktime/utils/numba/general.py\n--- a/sktime/utils/numba/general.py\n+++ b/sktime/utils/numba/general.py\n@@ -34,6 +34,5 @@\n if std > 0:\n X_n = (X - np.mean(X)) / std\n else:\n- X_n = np.zeros(len(X))\n-\n+ X_n = X - np.mean(X)\n return X_n\n", "issue": "[BUG] ShapeletTransformClassifier numba error when dtype is not float64\n**Describe the bug**\r\nSeems that when using `ShapeletTransformClassifier` there is some Numba accelerated functions that break if the data in the input data frame are of type `int32`.\r\n\r\n**To Reproduce**\r\nMRE as below:\r\n\r\n```python\r\nimport warnings\r\nwarnings.simplefilter('ignore', category=FutureWarning)\r\n\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nfrom sktime.classification.shapelet_based import ShapeletTransformClassifier\r\nfrom sktime.contrib.vector_classifiers._rotation_forest import RotationForest\r\n\r\n# make fake data\r\ndata = pd.DataFrame(np.random.random((5000, 250))).astype(np.float32)\r\n\r\n# reshape to input into Shapelet Classifier\r\ndata4train = data.apply(lambda row: pd.Series({\r\n 'time-series': pd.Series(row.values)\r\n}), axis=1)\r\n\r\n# make targets\r\ntargets = pd.Series(2500 * [1] + 2500 * [0])\r\n\r\n# train test split\r\nX_train, X_test, y_train, y_test = train_test_split(\r\n data4train, targets, test_size=0.7, random_state=42\r\n)\r\n\r\n# train\r\nclf = ShapeletTransformClassifier(\r\n estimator=RotationForest(n_estimators=3),\r\n n_shapelet_samples=500,\r\n max_shapelets=20,\r\n batch_size=100,\r\n)\r\n\r\nclf.fit(X_train, y_train)\r\n```\r\n\r\n**Expected behavior**\r\nwill not throw an error, and also enforce conversion to float32 or float64 within the classifier?\r\n**Additional context**\r\nremoving conversion to `float32` (hence `dtype == float64`) will make the code running without issues.\r\n\r\n**Versions**\r\nnumba 0.55.1\r\nsklearn 0.24.1\r\nsktime 0.11.0\r\npandas 1.4.2\r\npython 3.8.10\r\n\r\n**Stacktrace output**\r\n```bash\r\nTypingError: Failed in nopython mode pipeline (step: nopython frontend)\r\nCannot unify array(float64, 1d, C) and array(float32, 1d, C) for 'X_n.2', defined at /path_to_mypython/python/lib/python3.8/site-packages/sktime/utils/numba/general.py (39)\r\n\r\nFile \"../python/lib/python3.8/site-packages/sktime/utils/numba/general.py\", line 39:\r\ndef z_normalise_series(X):\r\n <source elided>\r\n\r\n return X_n\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"General numba utilities.\"\"\"\n\nimport numpy as np\nfrom numba import njit\n\n\n@njit(fastmath=True, cache=True)\ndef unique_count(X):\n \"\"\"Numba unique count function for a 1D array.\"\"\"\n if len(X) > 0:\n X = np.sort(X)\n unique = np.zeros(len(X))\n unique[0] = X[0]\n counts = np.zeros(len(X), dtype=np.int_)\n counts[0] = 1\n unique_count = 0\n\n for i in X[1:]:\n if i != unique[unique_count]:\n unique_count += 1\n unique[unique_count] = i\n counts[unique_count] = 1\n else:\n counts[unique_count] += 1\n return unique[: unique_count + 1], counts[: unique_count + 1]\n return None, np.zeros(0, dtype=np.int_)\n\n\n@njit(fastmath=True, cache=True)\ndef z_normalise_series(X):\n \"\"\"Numba z-normalisation function for a single time series.\"\"\"\n std = np.std(X)\n if std > 0:\n X_n = (X - np.mean(X)) / std\n else:\n X_n = np.zeros(len(X))\n\n return X_n\n", "path": "sktime/utils/numba/general.py"}]} | 1,468 | 111 |
gh_patches_debug_20253 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-901 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`mathesar_temp_schema` should be hidden
## Description
<!-- A clear and concise description of what the bug is. -->
Currently, the system schema `mathesar_temp_schema` is returned as a standard schema, and ends up displayed as a result in the UI. This is confusing, since that schema is used for system operations, and shouldn't be available to the user.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The schema `mathesar_temp_schema` should be hidden.
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
After starting the service and doing type inference on at least one CSV loading into a table, go to `http://localhost:8000/api/v0/schemas/`. Note that `mathesar_temp_schema` will be one of the schemata in the `mathesar_tables` DB.
## Additional context
<!-- Add any other context about the problem or screenshots here. -->
We're already hiding some schemata, e.g., `mathesar_types`. The implementer should figure out where the list of such schemata is, and add `mathesar_temp_schema` to that list.
</issue>
<code>
[start of db/tables/operations/infer_types.py]
1 from time import time
2
3 from sqlalchemy import select
4
5 from db import constants
6 from db.columns.base import MathesarColumn
7 from db.columns.operations.infer_types import infer_column_type
8 from db.schemas.operations.create import create_schema
9 from db.tables.operations.create import CreateTableAs
10 from db.tables.operations.select import reflect_table
11
12
13 TEMP_SCHEMA = f"{constants.MATHESAR_PREFIX}temp_schema"
14 TEMP_TABLE = f"{constants.MATHESAR_PREFIX}temp_table_%s"
15
16
17 def update_table_column_types(schema, table_name, engine):
18 table = reflect_table(table_name, schema, engine)
19 # we only want to infer (modify) the type of non-default columns
20 inferable_column_names = (
21 col.name for col in table.columns
22 if not MathesarColumn.from_column(col).is_default
23 and not col.primary_key
24 and not col.foreign_keys
25 )
26 for column_name in inferable_column_names:
27 infer_column_type(
28 schema,
29 table_name,
30 column_name,
31 engine,
32 )
33
34
35 def infer_table_column_types(schema, table_name, engine):
36 table = reflect_table(table_name, schema, engine)
37
38 temp_name = TEMP_TABLE % (int(time()))
39 create_schema(TEMP_SCHEMA, engine)
40 with engine.begin() as conn:
41 while engine.dialect.has_table(conn, temp_name, schema=TEMP_SCHEMA):
42 temp_name = TEMP_TABLE.format(int(time()))
43
44 full_temp_name = f"{TEMP_SCHEMA}.{temp_name}"
45
46 select_table = select(table)
47 with engine.begin() as conn:
48 conn.execute(CreateTableAs(full_temp_name, select_table))
49 temp_table = reflect_table(temp_name, TEMP_SCHEMA, engine)
50
51 try:
52 update_table_column_types(
53 TEMP_SCHEMA, temp_table.name, engine,
54 )
55 except Exception as e:
56 # Ensure the temp table is deleted
57 temp_table.drop()
58 raise e
59 else:
60 temp_table = reflect_table(temp_name, TEMP_SCHEMA, engine)
61 types = [c.type.__class__ for c in temp_table.columns]
62 temp_table.drop()
63 return types
64
[end of db/tables/operations/infer_types.py]
[start of db/schemas/operations/select.py]
1 import warnings
2
3 from sqlalchemy import MetaData, select, and_, not_, or_, Table
4
5 from db import types
6
7
8 TYPES_SCHEMA = types.base.SCHEMA
9 EXCLUDED_SCHEMATA = [TYPES_SCHEMA, "information_schema"]
10
11
12 def reflect_schema(engine, name=None, oid=None):
13 # If we have both arguments, the behavior is undefined.
14 try:
15 assert name is None or oid is None
16 except AssertionError as e:
17 raise e
18 metadata = MetaData()
19 with warnings.catch_warnings():
20 warnings.filterwarnings("ignore", message="Did not recognize type")
21 pg_namespace = Table("pg_namespace", metadata, autoload_with=engine)
22 sel = (
23 select(pg_namespace.c.oid, pg_namespace.c.nspname.label("name"))
24 .where(or_(pg_namespace.c.nspname == name, pg_namespace.c.oid == oid))
25 )
26 with engine.begin() as conn:
27 schema_info = conn.execute(sel).fetchone()
28 return schema_info
29
30
31 def get_mathesar_schemas_with_oids(engine):
32 metadata = MetaData()
33 with warnings.catch_warnings():
34 warnings.filterwarnings("ignore", message="Did not recognize type")
35 pg_namespace = Table("pg_namespace", metadata, autoload_with=engine)
36 sel = (
37 select(pg_namespace.c.nspname.label('schema'), pg_namespace.c.oid)
38 .where(
39 and_(
40 *[pg_namespace.c.nspname != schema for schema in EXCLUDED_SCHEMATA],
41 not_(pg_namespace.c.nspname.like("pg_%"))
42 )
43 )
44 )
45 with engine.begin() as conn:
46 result = conn.execute(sel).fetchall()
47 return result
48
[end of db/schemas/operations/select.py]
[start of db/constants.py]
1 MATHESAR_PREFIX = "mathesar_"
2 ID = "id"
3 ID_ORIGINAL = "id_original"
4
[end of db/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/constants.py b/db/constants.py
--- a/db/constants.py
+++ b/db/constants.py
@@ -1,3 +1,4 @@
MATHESAR_PREFIX = "mathesar_"
ID = "id"
ID_ORIGINAL = "id_original"
+INFERENCE_SCHEMA = f"{MATHESAR_PREFIX}inference_schema"
diff --git a/db/schemas/operations/select.py b/db/schemas/operations/select.py
--- a/db/schemas/operations/select.py
+++ b/db/schemas/operations/select.py
@@ -2,11 +2,12 @@
from sqlalchemy import MetaData, select, and_, not_, or_, Table
+from db import constants
from db import types
-
TYPES_SCHEMA = types.base.SCHEMA
-EXCLUDED_SCHEMATA = [TYPES_SCHEMA, "information_schema"]
+TEMP_INFER_SCHEMA = constants.INFERENCE_SCHEMA
+EXCLUDED_SCHEMATA = [TYPES_SCHEMA, TEMP_INFER_SCHEMA, "information_schema"]
def reflect_schema(engine, name=None, oid=None):
diff --git a/db/tables/operations/infer_types.py b/db/tables/operations/infer_types.py
--- a/db/tables/operations/infer_types.py
+++ b/db/tables/operations/infer_types.py
@@ -10,7 +10,7 @@
from db.tables.operations.select import reflect_table
-TEMP_SCHEMA = f"{constants.MATHESAR_PREFIX}temp_schema"
+TEMP_SCHEMA = constants.INFERENCE_SCHEMA
TEMP_TABLE = f"{constants.MATHESAR_PREFIX}temp_table_%s"
| {"golden_diff": "diff --git a/db/constants.py b/db/constants.py\n--- a/db/constants.py\n+++ b/db/constants.py\n@@ -1,3 +1,4 @@\n MATHESAR_PREFIX = \"mathesar_\"\n ID = \"id\"\n ID_ORIGINAL = \"id_original\"\n+INFERENCE_SCHEMA = f\"{MATHESAR_PREFIX}inference_schema\"\ndiff --git a/db/schemas/operations/select.py b/db/schemas/operations/select.py\n--- a/db/schemas/operations/select.py\n+++ b/db/schemas/operations/select.py\n@@ -2,11 +2,12 @@\n \n from sqlalchemy import MetaData, select, and_, not_, or_, Table\n \n+from db import constants\n from db import types\n \n-\n TYPES_SCHEMA = types.base.SCHEMA\n-EXCLUDED_SCHEMATA = [TYPES_SCHEMA, \"information_schema\"]\n+TEMP_INFER_SCHEMA = constants.INFERENCE_SCHEMA\n+EXCLUDED_SCHEMATA = [TYPES_SCHEMA, TEMP_INFER_SCHEMA, \"information_schema\"]\n \n \n def reflect_schema(engine, name=None, oid=None):\ndiff --git a/db/tables/operations/infer_types.py b/db/tables/operations/infer_types.py\n--- a/db/tables/operations/infer_types.py\n+++ b/db/tables/operations/infer_types.py\n@@ -10,7 +10,7 @@\n from db.tables.operations.select import reflect_table\n \n \n-TEMP_SCHEMA = f\"{constants.MATHESAR_PREFIX}temp_schema\"\n+TEMP_SCHEMA = constants.INFERENCE_SCHEMA\n TEMP_TABLE = f\"{constants.MATHESAR_PREFIX}temp_table_%s\"\n", "issue": "`mathesar_temp_schema` should be hidden\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nCurrently, the system schema `mathesar_temp_schema` is returned as a standard schema, and ends up displayed as a result in the UI. This is confusing, since that schema is used for system operations, and shouldn't be available to the user.\r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nThe schema `mathesar_temp_schema` should be hidden.\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->\r\n\r\nAfter starting the service and doing type inference on at least one CSV loading into a table, go to `http://localhost:8000/api/v0/schemas/`. Note that `mathesar_temp_schema` will be one of the schemata in the `mathesar_tables` DB.\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem or screenshots here. -->\r\n\r\nWe're already hiding some schemata, e.g., `mathesar_types`. The implementer should figure out where the list of such schemata is, and add `mathesar_temp_schema` to that list.\n", "before_files": [{"content": "from time import time\n\nfrom sqlalchemy import select\n\nfrom db import constants\nfrom db.columns.base import MathesarColumn\nfrom db.columns.operations.infer_types import infer_column_type\nfrom db.schemas.operations.create import create_schema\nfrom db.tables.operations.create import CreateTableAs\nfrom db.tables.operations.select import reflect_table\n\n\nTEMP_SCHEMA = f\"{constants.MATHESAR_PREFIX}temp_schema\"\nTEMP_TABLE = f\"{constants.MATHESAR_PREFIX}temp_table_%s\"\n\n\ndef update_table_column_types(schema, table_name, engine):\n table = reflect_table(table_name, schema, engine)\n # we only want to infer (modify) the type of non-default columns\n inferable_column_names = (\n col.name for col in table.columns\n if not MathesarColumn.from_column(col).is_default\n and not col.primary_key\n and not col.foreign_keys\n )\n for column_name in inferable_column_names:\n infer_column_type(\n schema,\n table_name,\n column_name,\n engine,\n )\n\n\ndef infer_table_column_types(schema, table_name, engine):\n table = reflect_table(table_name, schema, engine)\n\n temp_name = TEMP_TABLE % (int(time()))\n create_schema(TEMP_SCHEMA, engine)\n with engine.begin() as conn:\n while engine.dialect.has_table(conn, temp_name, schema=TEMP_SCHEMA):\n temp_name = TEMP_TABLE.format(int(time()))\n\n full_temp_name = f\"{TEMP_SCHEMA}.{temp_name}\"\n\n select_table = select(table)\n with engine.begin() as conn:\n conn.execute(CreateTableAs(full_temp_name, select_table))\n temp_table = reflect_table(temp_name, TEMP_SCHEMA, engine)\n\n try:\n update_table_column_types(\n TEMP_SCHEMA, temp_table.name, engine,\n )\n except Exception as e:\n # Ensure the temp table is deleted\n temp_table.drop()\n raise e\n else:\n temp_table = reflect_table(temp_name, TEMP_SCHEMA, engine)\n types = [c.type.__class__ for c in temp_table.columns]\n temp_table.drop()\n return types\n", "path": "db/tables/operations/infer_types.py"}, {"content": "import warnings\n\nfrom sqlalchemy import MetaData, select, and_, not_, or_, Table\n\nfrom db import types\n\n\nTYPES_SCHEMA = types.base.SCHEMA\nEXCLUDED_SCHEMATA = [TYPES_SCHEMA, \"information_schema\"]\n\n\ndef reflect_schema(engine, name=None, oid=None):\n # If we have both arguments, the behavior is undefined.\n try:\n assert name is None or oid is None\n except AssertionError as e:\n raise e\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_namespace = Table(\"pg_namespace\", metadata, autoload_with=engine)\n sel = (\n select(pg_namespace.c.oid, pg_namespace.c.nspname.label(\"name\"))\n .where(or_(pg_namespace.c.nspname == name, pg_namespace.c.oid == oid))\n )\n with engine.begin() as conn:\n schema_info = conn.execute(sel).fetchone()\n return schema_info\n\n\ndef get_mathesar_schemas_with_oids(engine):\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_namespace = Table(\"pg_namespace\", metadata, autoload_with=engine)\n sel = (\n select(pg_namespace.c.nspname.label('schema'), pg_namespace.c.oid)\n .where(\n and_(\n *[pg_namespace.c.nspname != schema for schema in EXCLUDED_SCHEMATA],\n not_(pg_namespace.c.nspname.like(\"pg_%\"))\n )\n )\n )\n with engine.begin() as conn:\n result = conn.execute(sel).fetchall()\n return result\n", "path": "db/schemas/operations/select.py"}, {"content": "MATHESAR_PREFIX = \"mathesar_\"\nID = \"id\"\nID_ORIGINAL = \"id_original\"\n", "path": "db/constants.py"}]} | 1,888 | 338 |
gh_patches_debug_2157 | rasdani/github-patches | git_diff | scipy__scipy-10353 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: interpolate.NearestNDInterpolator with pandas
interpolate.NearestNDInterpolator does not work as expected when used with selected pandas dataframe.
This is due to the index being maintained when making selections in pandas.
### Reproducing code example:
```
import numpy as np
import pandas as pd
from scipy import interpolate
df = pd.DataFrame(np.array([[0, 0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 1, 2]]).T, columns=['x', 'y', 'z'])
df_select = df[3:]
NI = interpolate.NearestNDInterpolator((df_select.x, df_select.y), df_select.z)
print(NI([0.1, 0.9], [0.1, 0.9]))
```
I expect [0, 2] to be output.
But output is [Nan, 0] as pandas.Series.
This is due to the index being maintained when making selections in pandas.
Specifically, `df_select.z` has index[3, 4, 5, 6].
But, self.tree.query (xi) line 81, in scipy/interpolate/ndgriddata.py returns a index that assumes that the index starts from zero.
So, self.tree.query (xi) return [0, 3]
Therefore, self.values[i] line 82, in scipy/interpolate/ndgriddata.py using Invalid index.
### Note
if case of
```
df_select = df[3:].reset_index()
```
or
```
NI = interpolate.NearestNDInterpolator((df_select.x, df_select.y), np.array(df_select.z))
```
it works as expected.
Also, this bug does not occur in interpolate.LinearNDInterpolator.
### Scipy/Numpy/Python version information:
```
1.3.0 1.16.4 sys.version_info(major=3, minor=6, micro=8, releaselevel='final', serial=0)
```
</issue>
<code>
[start of scipy/interpolate/ndgriddata.py]
1 """
2 Convenience interface to N-D interpolation
3
4 .. versionadded:: 0.9
5
6 """
7 from __future__ import division, print_function, absolute_import
8
9 import numpy as np
10 from .interpnd import LinearNDInterpolator, NDInterpolatorBase, \
11 CloughTocher2DInterpolator, _ndim_coords_from_arrays
12 from scipy.spatial import cKDTree
13
14 __all__ = ['griddata', 'NearestNDInterpolator', 'LinearNDInterpolator',
15 'CloughTocher2DInterpolator']
16
17 #------------------------------------------------------------------------------
18 # Nearest-neighbour interpolation
19 #------------------------------------------------------------------------------
20
21
22 class NearestNDInterpolator(NDInterpolatorBase):
23 """
24 NearestNDInterpolator(x, y)
25
26 Nearest-neighbour interpolation in N dimensions.
27
28 .. versionadded:: 0.9
29
30 Methods
31 -------
32 __call__
33
34 Parameters
35 ----------
36 x : (Npoints, Ndims) ndarray of floats
37 Data point coordinates.
38 y : (Npoints,) ndarray of float or complex
39 Data values.
40 rescale : boolean, optional
41 Rescale points to unit cube before performing interpolation.
42 This is useful if some of the input dimensions have
43 incommensurable units and differ by many orders of magnitude.
44
45 .. versionadded:: 0.14.0
46 tree_options : dict, optional
47 Options passed to the underlying ``cKDTree``.
48
49 .. versionadded:: 0.17.0
50
51
52 Notes
53 -----
54 Uses ``scipy.spatial.cKDTree``
55
56 """
57
58 def __init__(self, x, y, rescale=False, tree_options=None):
59 NDInterpolatorBase.__init__(self, x, y, rescale=rescale,
60 need_contiguous=False,
61 need_values=False)
62 if tree_options is None:
63 tree_options = dict()
64 self.tree = cKDTree(self.points, **tree_options)
65 self.values = y
66
67 def __call__(self, *args):
68 """
69 Evaluate interpolator at given points.
70
71 Parameters
72 ----------
73 xi : ndarray of float, shape (..., ndim)
74 Points where to interpolate data at.
75
76 """
77 xi = _ndim_coords_from_arrays(args, ndim=self.points.shape[1])
78 xi = self._check_call_shape(xi)
79 xi = self._scale_x(xi)
80 dist, i = self.tree.query(xi)
81 return self.values[i]
82
83
84 #------------------------------------------------------------------------------
85 # Convenience interface function
86 #------------------------------------------------------------------------------
87
88 def griddata(points, values, xi, method='linear', fill_value=np.nan,
89 rescale=False):
90 """
91 Interpolate unstructured D-dimensional data.
92
93 Parameters
94 ----------
95 points : 2-D ndarray of floats with shape (n, D), or length D tuple of 1-D ndarrays with shape (n,).
96 Data point coordinates.
97 values : ndarray of float or complex, shape (n,)
98 Data values.
99 xi : 2-D ndarray of floats with shape (m, D), or length D tuple of ndarrays broadcastable to the same shape.
100 Points at which to interpolate data.
101 method : {'linear', 'nearest', 'cubic'}, optional
102 Method of interpolation. One of
103
104 ``nearest``
105 return the value at the data point closest to
106 the point of interpolation. See `NearestNDInterpolator` for
107 more details.
108
109 ``linear``
110 tessellate the input point set to n-dimensional
111 simplices, and interpolate linearly on each simplex. See
112 `LinearNDInterpolator` for more details.
113
114 ``cubic`` (1-D)
115 return the value determined from a cubic
116 spline.
117
118 ``cubic`` (2-D)
119 return the value determined from a
120 piecewise cubic, continuously differentiable (C1), and
121 approximately curvature-minimizing polynomial surface. See
122 `CloughTocher2DInterpolator` for more details.
123 fill_value : float, optional
124 Value used to fill in for requested points outside of the
125 convex hull of the input points. If not provided, then the
126 default is ``nan``. This option has no effect for the
127 'nearest' method.
128 rescale : bool, optional
129 Rescale points to unit cube before performing interpolation.
130 This is useful if some of the input dimensions have
131 incommensurable units and differ by many orders of magnitude.
132
133 .. versionadded:: 0.14.0
134
135 Returns
136 -------
137 ndarray
138 Array of interpolated values.
139
140 Notes
141 -----
142
143 .. versionadded:: 0.9
144
145 Examples
146 --------
147
148 Suppose we want to interpolate the 2-D function
149
150 >>> def func(x, y):
151 ... return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
152
153 on a grid in [0, 1]x[0, 1]
154
155 >>> grid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]
156
157 but we only know its values at 1000 data points:
158
159 >>> points = np.random.rand(1000, 2)
160 >>> values = func(points[:,0], points[:,1])
161
162 This can be done with `griddata` -- below we try out all of the
163 interpolation methods:
164
165 >>> from scipy.interpolate import griddata
166 >>> grid_z0 = griddata(points, values, (grid_x, grid_y), method='nearest')
167 >>> grid_z1 = griddata(points, values, (grid_x, grid_y), method='linear')
168 >>> grid_z2 = griddata(points, values, (grid_x, grid_y), method='cubic')
169
170 One can see that the exact result is reproduced by all of the
171 methods to some degree, but for this smooth function the piecewise
172 cubic interpolant gives the best results:
173
174 >>> import matplotlib.pyplot as plt
175 >>> plt.subplot(221)
176 >>> plt.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower')
177 >>> plt.plot(points[:,0], points[:,1], 'k.', ms=1)
178 >>> plt.title('Original')
179 >>> plt.subplot(222)
180 >>> plt.imshow(grid_z0.T, extent=(0,1,0,1), origin='lower')
181 >>> plt.title('Nearest')
182 >>> plt.subplot(223)
183 >>> plt.imshow(grid_z1.T, extent=(0,1,0,1), origin='lower')
184 >>> plt.title('Linear')
185 >>> plt.subplot(224)
186 >>> plt.imshow(grid_z2.T, extent=(0,1,0,1), origin='lower')
187 >>> plt.title('Cubic')
188 >>> plt.gcf().set_size_inches(6, 6)
189 >>> plt.show()
190
191 """
192
193 points = _ndim_coords_from_arrays(points)
194
195 if points.ndim < 2:
196 ndim = points.ndim
197 else:
198 ndim = points.shape[-1]
199
200 if ndim == 1 and method in ('nearest', 'linear', 'cubic'):
201 from .interpolate import interp1d
202 points = points.ravel()
203 if isinstance(xi, tuple):
204 if len(xi) != 1:
205 raise ValueError("invalid number of dimensions in xi")
206 xi, = xi
207 # Sort points/values together, necessary as input for interp1d
208 idx = np.argsort(points)
209 points = points[idx]
210 values = values[idx]
211 if method == 'nearest':
212 fill_value = 'extrapolate'
213 ip = interp1d(points, values, kind=method, axis=0, bounds_error=False,
214 fill_value=fill_value)
215 return ip(xi)
216 elif method == 'nearest':
217 ip = NearestNDInterpolator(points, values, rescale=rescale)
218 return ip(xi)
219 elif method == 'linear':
220 ip = LinearNDInterpolator(points, values, fill_value=fill_value,
221 rescale=rescale)
222 return ip(xi)
223 elif method == 'cubic' and ndim == 2:
224 ip = CloughTocher2DInterpolator(points, values, fill_value=fill_value,
225 rescale=rescale)
226 return ip(xi)
227 else:
228 raise ValueError("Unknown interpolation method %r for "
229 "%d dimensional data" % (method, ndim))
230
[end of scipy/interpolate/ndgriddata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scipy/interpolate/ndgriddata.py b/scipy/interpolate/ndgriddata.py
--- a/scipy/interpolate/ndgriddata.py
+++ b/scipy/interpolate/ndgriddata.py
@@ -62,7 +62,7 @@
if tree_options is None:
tree_options = dict()
self.tree = cKDTree(self.points, **tree_options)
- self.values = y
+ self.values = np.asarray(y)
def __call__(self, *args):
"""
| {"golden_diff": "diff --git a/scipy/interpolate/ndgriddata.py b/scipy/interpolate/ndgriddata.py\n--- a/scipy/interpolate/ndgriddata.py\n+++ b/scipy/interpolate/ndgriddata.py\n@@ -62,7 +62,7 @@\n if tree_options is None:\n tree_options = dict()\n self.tree = cKDTree(self.points, **tree_options)\n- self.values = y\n+ self.values = np.asarray(y)\n \n def __call__(self, *args):\n \"\"\"\n", "issue": "BUG: interpolate.NearestNDInterpolator with pandas\ninterpolate.NearestNDInterpolator does not work as expected when used with selected pandas dataframe.\r\nThis is due to the index being maintained when making selections in pandas.\r\n\r\n### Reproducing code example:\r\n```\r\nimport numpy as np\r\nimport pandas as pd\r\nfrom scipy import interpolate\r\n\r\ndf = pd.DataFrame(np.array([[0, 0, 0, 0, 1, 0, 1],\r\n [0, 0, 0, 0, 0, 1, 1],\r\n [0, 0, 0, 0, 1, 1, 2]]).T, columns=['x', 'y', 'z'])\r\ndf_select = df[3:]\r\nNI = interpolate.NearestNDInterpolator((df_select.x, df_select.y), df_select.z)\r\nprint(NI([0.1, 0.9], [0.1, 0.9]))\r\n```\r\nI expect [0, 2] to be output.\r\nBut output is [Nan, 0] as pandas.Series.\r\n\r\nThis is due to the index being maintained when making selections in pandas.\r\nSpecifically, `df_select.z` has index[3, 4, 5, 6].\r\nBut, self.tree.query (xi) line 81, in scipy/interpolate/ndgriddata.py returns a index that assumes that the index starts from zero.\r\nSo, self.tree.query (xi) return [0, 3]\r\nTherefore, self.values[i] line 82, in scipy/interpolate/ndgriddata.py using Invalid index.\r\n\r\n### Note\r\nif case of\r\n```\r\ndf_select = df[3:].reset_index()\r\n```\r\nor\r\n```\r\nNI = interpolate.NearestNDInterpolator((df_select.x, df_select.y), np.array(df_select.z))\r\n```\r\nit works as expected.\r\n\r\nAlso, this bug does not occur in interpolate.LinearNDInterpolator.\r\n\r\n### Scipy/Numpy/Python version information:\r\n```\r\n1.3.0 1.16.4 sys.version_info(major=3, minor=6, micro=8, releaselevel='final', serial=0)\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nConvenience interface to N-D interpolation\n\n.. versionadded:: 0.9\n\n\"\"\"\nfrom __future__ import division, print_function, absolute_import\n\nimport numpy as np\nfrom .interpnd import LinearNDInterpolator, NDInterpolatorBase, \\\n CloughTocher2DInterpolator, _ndim_coords_from_arrays\nfrom scipy.spatial import cKDTree\n\n__all__ = ['griddata', 'NearestNDInterpolator', 'LinearNDInterpolator',\n 'CloughTocher2DInterpolator']\n\n#------------------------------------------------------------------------------\n# Nearest-neighbour interpolation\n#------------------------------------------------------------------------------\n\n\nclass NearestNDInterpolator(NDInterpolatorBase):\n \"\"\"\n NearestNDInterpolator(x, y)\n\n Nearest-neighbour interpolation in N dimensions.\n\n .. versionadded:: 0.9\n\n Methods\n -------\n __call__\n\n Parameters\n ----------\n x : (Npoints, Ndims) ndarray of floats\n Data point coordinates.\n y : (Npoints,) ndarray of float or complex\n Data values.\n rescale : boolean, optional\n Rescale points to unit cube before performing interpolation.\n This is useful if some of the input dimensions have\n incommensurable units and differ by many orders of magnitude.\n\n .. versionadded:: 0.14.0\n tree_options : dict, optional\n Options passed to the underlying ``cKDTree``.\n\n .. versionadded:: 0.17.0\n\n\n Notes\n -----\n Uses ``scipy.spatial.cKDTree``\n\n \"\"\"\n\n def __init__(self, x, y, rescale=False, tree_options=None):\n NDInterpolatorBase.__init__(self, x, y, rescale=rescale,\n need_contiguous=False,\n need_values=False)\n if tree_options is None:\n tree_options = dict()\n self.tree = cKDTree(self.points, **tree_options)\n self.values = y\n\n def __call__(self, *args):\n \"\"\"\n Evaluate interpolator at given points.\n\n Parameters\n ----------\n xi : ndarray of float, shape (..., ndim)\n Points where to interpolate data at.\n\n \"\"\"\n xi = _ndim_coords_from_arrays(args, ndim=self.points.shape[1])\n xi = self._check_call_shape(xi)\n xi = self._scale_x(xi)\n dist, i = self.tree.query(xi)\n return self.values[i]\n\n\n#------------------------------------------------------------------------------\n# Convenience interface function\n#------------------------------------------------------------------------------\n\ndef griddata(points, values, xi, method='linear', fill_value=np.nan,\n rescale=False):\n \"\"\"\n Interpolate unstructured D-dimensional data.\n\n Parameters\n ----------\n points : 2-D ndarray of floats with shape (n, D), or length D tuple of 1-D ndarrays with shape (n,).\n Data point coordinates. \n values : ndarray of float or complex, shape (n,)\n Data values.\n xi : 2-D ndarray of floats with shape (m, D), or length D tuple of ndarrays broadcastable to the same shape.\n Points at which to interpolate data.\n method : {'linear', 'nearest', 'cubic'}, optional\n Method of interpolation. One of\n\n ``nearest``\n return the value at the data point closest to\n the point of interpolation. See `NearestNDInterpolator` for\n more details.\n\n ``linear``\n tessellate the input point set to n-dimensional\n simplices, and interpolate linearly on each simplex. See\n `LinearNDInterpolator` for more details.\n\n ``cubic`` (1-D)\n return the value determined from a cubic\n spline.\n\n ``cubic`` (2-D)\n return the value determined from a\n piecewise cubic, continuously differentiable (C1), and\n approximately curvature-minimizing polynomial surface. See\n `CloughTocher2DInterpolator` for more details.\n fill_value : float, optional\n Value used to fill in for requested points outside of the\n convex hull of the input points. If not provided, then the\n default is ``nan``. This option has no effect for the\n 'nearest' method.\n rescale : bool, optional\n Rescale points to unit cube before performing interpolation.\n This is useful if some of the input dimensions have\n incommensurable units and differ by many orders of magnitude.\n\n .. versionadded:: 0.14.0\n \n Returns\n -------\n ndarray\n Array of interpolated values.\n\n Notes\n -----\n\n .. versionadded:: 0.9\n\n Examples\n --------\n\n Suppose we want to interpolate the 2-D function\n\n >>> def func(x, y):\n ... return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2\n\n on a grid in [0, 1]x[0, 1]\n\n >>> grid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]\n\n but we only know its values at 1000 data points:\n\n >>> points = np.random.rand(1000, 2)\n >>> values = func(points[:,0], points[:,1])\n\n This can be done with `griddata` -- below we try out all of the\n interpolation methods:\n\n >>> from scipy.interpolate import griddata\n >>> grid_z0 = griddata(points, values, (grid_x, grid_y), method='nearest')\n >>> grid_z1 = griddata(points, values, (grid_x, grid_y), method='linear')\n >>> grid_z2 = griddata(points, values, (grid_x, grid_y), method='cubic')\n\n One can see that the exact result is reproduced by all of the\n methods to some degree, but for this smooth function the piecewise\n cubic interpolant gives the best results:\n\n >>> import matplotlib.pyplot as plt\n >>> plt.subplot(221)\n >>> plt.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower')\n >>> plt.plot(points[:,0], points[:,1], 'k.', ms=1)\n >>> plt.title('Original')\n >>> plt.subplot(222)\n >>> plt.imshow(grid_z0.T, extent=(0,1,0,1), origin='lower')\n >>> plt.title('Nearest')\n >>> plt.subplot(223)\n >>> plt.imshow(grid_z1.T, extent=(0,1,0,1), origin='lower')\n >>> plt.title('Linear')\n >>> plt.subplot(224)\n >>> plt.imshow(grid_z2.T, extent=(0,1,0,1), origin='lower')\n >>> plt.title('Cubic')\n >>> plt.gcf().set_size_inches(6, 6)\n >>> plt.show()\n\n \"\"\"\n\n points = _ndim_coords_from_arrays(points)\n\n if points.ndim < 2:\n ndim = points.ndim\n else:\n ndim = points.shape[-1]\n\n if ndim == 1 and method in ('nearest', 'linear', 'cubic'):\n from .interpolate import interp1d\n points = points.ravel()\n if isinstance(xi, tuple):\n if len(xi) != 1:\n raise ValueError(\"invalid number of dimensions in xi\")\n xi, = xi\n # Sort points/values together, necessary as input for interp1d\n idx = np.argsort(points)\n points = points[idx]\n values = values[idx]\n if method == 'nearest':\n fill_value = 'extrapolate'\n ip = interp1d(points, values, kind=method, axis=0, bounds_error=False,\n fill_value=fill_value)\n return ip(xi)\n elif method == 'nearest':\n ip = NearestNDInterpolator(points, values, rescale=rescale)\n return ip(xi)\n elif method == 'linear':\n ip = LinearNDInterpolator(points, values, fill_value=fill_value,\n rescale=rescale)\n return ip(xi)\n elif method == 'cubic' and ndim == 2:\n ip = CloughTocher2DInterpolator(points, values, fill_value=fill_value,\n rescale=rescale)\n return ip(xi)\n else:\n raise ValueError(\"Unknown interpolation method %r for \"\n \"%d dimensional data\" % (method, ndim))\n", "path": "scipy/interpolate/ndgriddata.py"}]} | 3,425 | 120 |
gh_patches_debug_60593 | rasdani/github-patches | git_diff | pytorch__TensorRT-196 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🐛 [Bug] UnicodeDecodeError running setup.py
## Bug Description
Trying to run "python setup.py install" fails with a unicode error when reading README.md.
## To Reproduce
Steps to reproduce the behavior:
1. docker run --gpus=all -it nvcr.io/nvidia/tensorrt:20.03-py3 /bin/bash
2. (cd /usr/bin && wget -O bazel https://github.com/bazelbuild/bazelisk/releases/download/v1.7.3/bazelisk-linux-amd64 && chmod +x bazel)
3. git clone https://github.com/NVIDIA/TRTorch.git
4. cd TRTorch/py
5. pip install -r requirements.txt
6. python setup.py install
The error follows:
> root@320583666d0c:/workspace/TRTorch/py# python setup.py install
> Traceback (most recent call last):
> File "setup.py", line 194, in <module>
> long_description = fh.read()
> File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
> return codecs.ascii_decode(input, self.errors)[0]
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 7349: ordinal not in range(128)
## Expected behavior
No unicode error
## Environment
- PyTorch Version (e.g., 1.0): 1.6.0
- CPU Architecture: x86
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): python setup.py install
- Are you using local sources or building from archives: local sources (git clone)
- Python version: 3.6.9
- CUDA version: 10.2.89
- GPU models and configuration: gtx 970
## Additional context
The following appears to resolve the issue:
```
diff --git a/py/setup.py b/py/setup.py
index 53f85da..8344c0a 100644
--- a/py/setup.py
+++ b/py/setup.py
@@ -190,7 +190,7 @@ ext_modules = [
)
]
-with open("README.md", "r") as fh:
+with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()
setup(
```
</issue>
<code>
[start of py/setup.py]
1 import os
2 import sys
3 import glob
4 import setuptools
5 from setuptools import setup, Extension, find_packages
6 from setuptools.command.build_ext import build_ext
7 from setuptools.command.develop import develop
8 from setuptools.command.install import install
9 from distutils.cmd import Command
10 from wheel.bdist_wheel import bdist_wheel
11
12 from torch.utils import cpp_extension
13 from shutil import copyfile, rmtree
14
15 import subprocess
16
17 dir_path = os.path.dirname(os.path.realpath(__file__))
18
19 __version__ = '0.1.0a0'
20
21 CXX11_ABI = False
22
23 if "--use-cxx11-abi" in sys.argv:
24 sys.argv.remove("--use-cxx11-abi")
25 CXX11_ABI = True
26
27 def which(program):
28 import os
29 def is_exe(fpath):
30 return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
31
32 fpath, fname = os.path.split(program)
33 if fpath:
34 if is_exe(program):
35 return program
36 else:
37 for path in os.environ["PATH"].split(os.pathsep):
38 exe_file = os.path.join(path, program)
39 if is_exe(exe_file):
40 return exe_file
41
42 return None
43
44 BAZEL_EXE = which("bazel")
45
46 def build_libtrtorch_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=False):
47 cmd = [BAZEL_EXE, "build"]
48 cmd.append("//cpp/api/lib:libtrtorch.so")
49 if develop:
50 cmd.append("--compilation_mode=dbg")
51 else:
52 cmd.append("--compilation_mode=opt")
53 if use_dist_dir:
54 cmd.append("--distdir=third_party/dist_dir/x86_64-linux-gnu")
55 if not cxx11_abi:
56 cmd.append("--config=python")
57 else:
58 print("using CXX11 ABI build")
59
60 print("building libtrtorch")
61 status_code = subprocess.run(cmd).returncode
62
63 if status_code != 0:
64 sys.exit(status_code)
65
66
67 def gen_version_file():
68 if not os.path.exists(dir_path + '/trtorch/_version.py'):
69 os.mknod(dir_path + '/trtorch/_version.py')
70
71 with open(dir_path + '/trtorch/_version.py', 'w') as f:
72 print("creating version file")
73 f.write("__version__ = \"" + __version__ + '\"')
74
75 def copy_libtrtorch(multilinux=False):
76 if not os.path.exists(dir_path + '/trtorch/lib'):
77 os.makedirs(dir_path + '/trtorch/lib')
78
79 print("copying library into module")
80 if multilinux:
81 copyfile(dir_path + "/build/libtrtorch_build/libtrtorch.so", dir_path + '/trtorch/lib/libtrtorch.so')
82 else:
83 copyfile(dir_path + "/../bazel-bin/cpp/api/lib/libtrtorch.so", dir_path + '/trtorch/lib/libtrtorch.so')
84
85 class DevelopCommand(develop):
86 description = "Builds the package and symlinks it into the PYTHONPATH"
87
88 def initialize_options(self):
89 develop.initialize_options(self)
90
91 def finalize_options(self):
92 develop.finalize_options(self)
93
94 def run(self):
95 global CXX11_ABI
96 build_libtrtorch_pre_cxx11_abi(develop=True, cxx11_abi=CXX11_ABI)
97 gen_version_file()
98 copy_libtrtorch()
99 develop.run(self)
100
101
102 class InstallCommand(install):
103 description = "Builds the package"
104
105 def initialize_options(self):
106 install.initialize_options(self)
107
108 def finalize_options(self):
109 install.finalize_options(self)
110
111 def run(self):
112 global CXX11_ABI
113 build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)
114 gen_version_file()
115 copy_libtrtorch()
116 install.run(self)
117
118 class BdistCommand(bdist_wheel):
119 description = "Builds the package"
120
121 def initialize_options(self):
122 bdist_wheel.initialize_options(self)
123
124 def finalize_options(self):
125 bdist_wheel.finalize_options(self)
126
127 def run(self):
128 global CXX11_ABI
129 build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)
130 gen_version_file()
131 copy_libtrtorch()
132 bdist_wheel.run(self)
133
134 class CleanCommand(Command):
135 """Custom clean command to tidy up the project root."""
136 PY_CLEAN_FILES = ['./build', './dist', './trtorch/__pycache__', './trtorch/lib', './*.pyc', './*.tgz', './*.egg-info']
137 description = "Command to tidy up the project root"
138 user_options = []
139
140 def initialize_options(self):
141 pass
142
143 def finalize_options(self):
144 pass
145
146 def run(self):
147 for path_spec in self.PY_CLEAN_FILES:
148 # Make paths absolute and relative to this path
149 abs_paths = glob.glob(os.path.normpath(os.path.join(dir_path, path_spec)))
150 for path in [str(p) for p in abs_paths]:
151 if not path.startswith(dir_path):
152 # Die if path in CLEAN_FILES is absolute + outside this directory
153 raise ValueError("%s is not a path inside %s" % (path, dir_path))
154 print('Removing %s' % os.path.relpath(path))
155 rmtree(path)
156
157 ext_modules = [
158 cpp_extension.CUDAExtension('trtorch._C',
159 [
160 'trtorch/csrc/trtorch_py.cpp',
161 'trtorch/csrc/tensorrt_backend.cpp',
162 'trtorch/csrc/tensorrt_classes.cpp',
163 'trtorch/csrc/register_tensorrt_classes.cpp',
164 ],
165 library_dirs=[
166 (dir_path + '/trtorch/lib/'),
167 "/opt/conda/lib/python3.6/config-3.6m-x86_64-linux-gnu"
168 ],
169 libraries=[
170 "trtorch"
171 ],
172 include_dirs=[
173 dir_path + "trtorch/csrc",
174 dir_path + "/../",
175 dir_path + "/../bazel-TRTorch/external/tensorrt/include",
176 ],
177 extra_compile_args=[
178 "-Wno-deprecated",
179 "-Wno-deprecated-declarations",
180 ] + (["-D_GLIBCXX_USE_CXX11_ABI=1"] if CXX11_ABI else ["-D_GLIBCXX_USE_CXX11_ABI=0"]),
181 extra_link_args=[
182 "-Wno-deprecated",
183 "-Wno-deprecated-declarations",
184 "-Wl,--no-as-needed",
185 "-ltrtorch",
186 "-Wl,-rpath,$ORIGIN/lib",
187 "-lpthread",
188 "-ldl",
189 "-lutil",
190 "-lrt",
191 "-lm",
192 "-Xlinker",
193 "-export-dynamic"
194 ] + (["-D_GLIBCXX_USE_CXX11_ABI=1"] if CXX11_ABI else ["-D_GLIBCXX_USE_CXX11_ABI=0"]),
195 undef_macros=[ "NDEBUG" ]
196 )
197 ]
198
199 with open("README.md", "r") as fh:
200 long_description = fh.read()
201
202 setup(
203 name='trtorch',
204 version=__version__,
205 author='NVIDIA',
206 author_email='[email protected]',
207 url='https://nvidia.github.io/TRTorch',
208 description='A compiler backend for PyTorch JIT targeting NVIDIA GPUs',
209 long_description_content_type='text/markdown',
210 long_description=long_description,
211 ext_modules=ext_modules,
212 install_requires=[
213 'torch==1.6.0',
214 ],
215 setup_requires=[],
216 cmdclass={
217 'install': InstallCommand,
218 'clean': CleanCommand,
219 'develop': DevelopCommand,
220 'build_ext': cpp_extension.BuildExtension,
221 'bdist_wheel': BdistCommand,
222 },
223 zip_safe=False,
224 license="BSD",
225 packages=find_packages(),
226 classifiers=[
227 "Development Status :: 4 - Beta",
228 "Environment :: GPU :: NVIDIA CUDA",
229 "License :: OSI Approved :: BSD License",
230 "Intended Audience :: Developers",
231 "Intended Audience :: Science/Research",
232 "Operating System :: POSIX :: Linux",
233 "Programming Language :: C++",
234 "Programming Language :: Python",
235 "Programming Language :: Python :: Implementation :: CPython",
236 "Topic :: Scientific/Engineering",
237 "Topic :: Scientific/Engineering :: Artificial Intelligence",
238 "Topic :: Software Development",
239 "Topic :: Software Development :: Libraries"
240 ],
241 python_requires='>=3.6',
242 include_package_data=True,
243 package_data={
244 'trtorch': ['lib/*.so'],
245 },
246 exclude_package_data={
247 '': ['*.cpp', '*.h'],
248 'trtorch': ['csrc/*.cpp'],
249 }
250 )
251
[end of py/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/py/setup.py b/py/setup.py
--- a/py/setup.py
+++ b/py/setup.py
@@ -190,7 +190,7 @@
)
]
-with open("README.md", "r") as fh:
+with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()
setup(
| {"golden_diff": "diff --git a/py/setup.py b/py/setup.py\n--- a/py/setup.py\n+++ b/py/setup.py\n@@ -190,7 +190,7 @@\n )\n ]\n \n-with open(\"README.md\", \"r\") as fh:\n+with open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n \n setup(\n", "issue": "\ud83d\udc1b [Bug] UnicodeDecodeError running setup.py\n## Bug Description\r\n\r\nTrying to run \"python setup.py install\" fails with a unicode error when reading README.md.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. docker run --gpus=all -it nvcr.io/nvidia/tensorrt:20.03-py3 /bin/bash\r\n2. (cd /usr/bin && wget -O bazel https://github.com/bazelbuild/bazelisk/releases/download/v1.7.3/bazelisk-linux-amd64 && chmod +x bazel)\r\n3. git clone https://github.com/NVIDIA/TRTorch.git\r\n4. cd TRTorch/py\r\n5. pip install -r requirements.txt\r\n6. python setup.py install\r\n\r\nThe error follows:\r\n> root@320583666d0c:/workspace/TRTorch/py# python setup.py install \r\n> Traceback (most recent call last):\r\n> File \"setup.py\", line 194, in <module>\r\n> long_description = fh.read()\r\n> File \"/usr/lib/python3.6/encodings/ascii.py\", line 26, in decode\r\n> return codecs.ascii_decode(input, self.errors)[0]\r\n> UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 7349: ordinal not in range(128)\r\n\r\n## Expected behavior\r\n\r\nNo unicode error\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.0): 1.6.0\r\n - CPU Architecture: x86\r\n - OS (e.g., Linux): Linux\r\n - How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip\r\n - Build command you used (if compiling from source): python setup.py install\r\n - Are you using local sources or building from archives: local sources (git clone)\r\n - Python version: 3.6.9\r\n - CUDA version: 10.2.89\r\n - GPU models and configuration: gtx 970\r\n\r\n## Additional context\r\n\r\nThe following appears to resolve the issue:\r\n\r\n```\r\ndiff --git a/py/setup.py b/py/setup.py\r\nindex 53f85da..8344c0a 100644\r\n--- a/py/setup.py\r\n+++ b/py/setup.py\r\n@@ -190,7 +190,7 @@ ext_modules = [\r\n )\r\n ]\r\n \r\n-with open(\"README.md\", \"r\") as fh:\r\n+with open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\r\n long_description = fh.read()\r\n \r\n setup(\r\n```\r\n\r\n\n", "before_files": [{"content": "import os\nimport sys\nimport glob\nimport setuptools\nfrom setuptools import setup, Extension, find_packages\nfrom setuptools.command.build_ext import build_ext\nfrom setuptools.command.develop import develop\nfrom setuptools.command.install import install\nfrom distutils.cmd import Command\nfrom wheel.bdist_wheel import bdist_wheel\n\nfrom torch.utils import cpp_extension\nfrom shutil import copyfile, rmtree\n\nimport subprocess\n\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\n__version__ = '0.1.0a0'\n\nCXX11_ABI = False\n\nif \"--use-cxx11-abi\" in sys.argv:\n sys.argv.remove(\"--use-cxx11-abi\")\n CXX11_ABI = True\n\ndef which(program):\n import os\n def is_exe(fpath):\n return os.path.isfile(fpath) and os.access(fpath, os.X_OK)\n\n fpath, fname = os.path.split(program)\n if fpath:\n if is_exe(program):\n return program\n else:\n for path in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(path, program)\n if is_exe(exe_file):\n return exe_file\n\n return None\n\nBAZEL_EXE = which(\"bazel\")\n\ndef build_libtrtorch_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=False):\n cmd = [BAZEL_EXE, \"build\"]\n cmd.append(\"//cpp/api/lib:libtrtorch.so\")\n if develop:\n cmd.append(\"--compilation_mode=dbg\")\n else:\n cmd.append(\"--compilation_mode=opt\")\n if use_dist_dir:\n cmd.append(\"--distdir=third_party/dist_dir/x86_64-linux-gnu\")\n if not cxx11_abi:\n cmd.append(\"--config=python\")\n else:\n print(\"using CXX11 ABI build\")\n\n print(\"building libtrtorch\")\n status_code = subprocess.run(cmd).returncode\n\n if status_code != 0:\n sys.exit(status_code)\n\n\ndef gen_version_file():\n if not os.path.exists(dir_path + '/trtorch/_version.py'):\n os.mknod(dir_path + '/trtorch/_version.py')\n\n with open(dir_path + '/trtorch/_version.py', 'w') as f:\n print(\"creating version file\")\n f.write(\"__version__ = \\\"\" + __version__ + '\\\"')\n\ndef copy_libtrtorch(multilinux=False):\n if not os.path.exists(dir_path + '/trtorch/lib'):\n os.makedirs(dir_path + '/trtorch/lib')\n\n print(\"copying library into module\")\n if multilinux:\n copyfile(dir_path + \"/build/libtrtorch_build/libtrtorch.so\", dir_path + '/trtorch/lib/libtrtorch.so')\n else:\n copyfile(dir_path + \"/../bazel-bin/cpp/api/lib/libtrtorch.so\", dir_path + '/trtorch/lib/libtrtorch.so')\n\nclass DevelopCommand(develop):\n description = \"Builds the package and symlinks it into the PYTHONPATH\"\n\n def initialize_options(self):\n develop.initialize_options(self)\n\n def finalize_options(self):\n develop.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=True, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n develop.run(self)\n\n\nclass InstallCommand(install):\n description = \"Builds the package\"\n\n def initialize_options(self):\n install.initialize_options(self)\n\n def finalize_options(self):\n install.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n install.run(self)\n\nclass BdistCommand(bdist_wheel):\n description = \"Builds the package\"\n\n def initialize_options(self):\n bdist_wheel.initialize_options(self)\n\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n bdist_wheel.run(self)\n\nclass CleanCommand(Command):\n \"\"\"Custom clean command to tidy up the project root.\"\"\"\n PY_CLEAN_FILES = ['./build', './dist', './trtorch/__pycache__', './trtorch/lib', './*.pyc', './*.tgz', './*.egg-info']\n description = \"Command to tidy up the project root\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n for path_spec in self.PY_CLEAN_FILES:\n # Make paths absolute and relative to this path\n abs_paths = glob.glob(os.path.normpath(os.path.join(dir_path, path_spec)))\n for path in [str(p) for p in abs_paths]:\n if not path.startswith(dir_path):\n # Die if path in CLEAN_FILES is absolute + outside this directory\n raise ValueError(\"%s is not a path inside %s\" % (path, dir_path))\n print('Removing %s' % os.path.relpath(path))\n rmtree(path)\n\next_modules = [\n cpp_extension.CUDAExtension('trtorch._C',\n [\n 'trtorch/csrc/trtorch_py.cpp',\n 'trtorch/csrc/tensorrt_backend.cpp',\n 'trtorch/csrc/tensorrt_classes.cpp',\n 'trtorch/csrc/register_tensorrt_classes.cpp',\n ],\n library_dirs=[\n (dir_path + '/trtorch/lib/'),\n \"/opt/conda/lib/python3.6/config-3.6m-x86_64-linux-gnu\"\n ],\n libraries=[\n \"trtorch\"\n ],\n include_dirs=[\n dir_path + \"trtorch/csrc\",\n dir_path + \"/../\",\n dir_path + \"/../bazel-TRTorch/external/tensorrt/include\",\n ],\n extra_compile_args=[\n \"-Wno-deprecated\",\n \"-Wno-deprecated-declarations\",\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n extra_link_args=[\n \"-Wno-deprecated\",\n \"-Wno-deprecated-declarations\",\n \"-Wl,--no-as-needed\",\n \"-ltrtorch\",\n \"-Wl,-rpath,$ORIGIN/lib\",\n \"-lpthread\",\n \"-ldl\",\n \"-lutil\",\n \"-lrt\",\n \"-lm\",\n \"-Xlinker\",\n \"-export-dynamic\"\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n undef_macros=[ \"NDEBUG\" ]\n )\n]\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\nsetup(\n name='trtorch',\n version=__version__,\n author='NVIDIA',\n author_email='[email protected]',\n url='https://nvidia.github.io/TRTorch',\n description='A compiler backend for PyTorch JIT targeting NVIDIA GPUs',\n long_description_content_type='text/markdown',\n long_description=long_description,\n ext_modules=ext_modules,\n install_requires=[\n 'torch==1.6.0',\n ],\n setup_requires=[],\n cmdclass={\n 'install': InstallCommand,\n 'clean': CleanCommand,\n 'develop': DevelopCommand,\n 'build_ext': cpp_extension.BuildExtension,\n 'bdist_wheel': BdistCommand,\n },\n zip_safe=False,\n license=\"BSD\",\n packages=find_packages(),\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: GPU :: NVIDIA CUDA\",\n \"License :: OSI Approved :: BSD License\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: C++\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\"\n ],\n python_requires='>=3.6',\n include_package_data=True,\n package_data={\n 'trtorch': ['lib/*.so'],\n },\n exclude_package_data={\n '': ['*.cpp', '*.h'],\n 'trtorch': ['csrc/*.cpp'],\n }\n)\n", "path": "py/setup.py"}]} | 3,664 | 83 |
gh_patches_debug_50801 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-6841 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Container: Regional Cluster support for GKE clusters
I'm unable to get or create regional clusters using the container_v1 client APIs. The [documentation](https://googleapis.github.io/google-cloud-python/latest/container/gapic/v1/api.html#google.cloud.container_v1.ClusterManagerClient.create_cluster) suggests that this is possible by using the `parent` parameter to describe the project/region to launch the cluster but I get the following errors:
```bash
(env) david@ ~ $ which python
~/dev/env/bin/python
(env) david@ ~ $ pip freeze
...
google-api-core==1.6.0
google-auth==1.6.1
google-cloud==0.34.0
google-cloud-container==0.1.1
googleapis-common-protos==1.5.5
grpcio==1.16.1
...
(env) david@ ~ $ python --version
Python 2.7.10
(env) david@ ~ $ python ./get_cluster.py
Traceback (most recent call last):
File "./get_cluster.py", line 6, in <module>
cluster = client.get_cluster(project_id=credentials.project_id, parent='projects/<project_id>/locations/us-east1', cluster_id='ha-cluster-1')
TypeError: get_cluster() got an unexpected keyword argument 'parent'
```
Is it possible that the API documentation has been updated before the feature was merged or is it more likely an environment issue on my end? Any insight into this would be appreciated
I have also looked at using the [google-api-python-client](https://github.com/googleapis/google-api-python-client#google-api-client) to launch regional clusters but I would prefer to use this library if the feature is supported. Are there any known workarounds for this?
</issue>
<code>
[start of container/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = 'google-cloud-container'
24 description = 'Google Container Engine API client library'
25 version = '0.1.1'
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = 'Development Status :: 3 - Alpha'
31 dependencies = [
32 'google-api-core[grpc] >= 1.6.0, < 2.0.0dev',
33 ]
34 extras = {
35 }
36
37
38 # Setup boilerplate below this line.
39
40 package_root = os.path.abspath(os.path.dirname(__file__))
41
42 readme_filename = os.path.join(package_root, 'README.rst')
43 with io.open(readme_filename, encoding='utf-8') as readme_file:
44 readme = readme_file.read()
45
46 # Only include packages under the 'google' namespace. Do not include tests,
47 # benchmarks, etc.
48 packages = [
49 package for package in setuptools.find_packages()
50 if package.startswith('google')]
51
52 # Determine which namespaces are needed.
53 namespaces = ['google']
54 if 'google.cloud' in packages:
55 namespaces.append('google.cloud')
56
57
58 setuptools.setup(
59 name=name,
60 version=version,
61 description=description,
62 long_description=readme,
63 author='Google LLC',
64 author_email='[email protected]',
65 license='Apache 2.0',
66 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
67 classifiers=[
68 release_status,
69 'Intended Audience :: Developers',
70 'License :: OSI Approved :: Apache Software License',
71 'Programming Language :: Python',
72 'Programming Language :: Python :: 2',
73 'Programming Language :: Python :: 2.7',
74 'Programming Language :: Python :: 3',
75 'Programming Language :: Python :: 3.4',
76 'Programming Language :: Python :: 3.5',
77 'Programming Language :: Python :: 3.6',
78 'Operating System :: OS Independent',
79 'Topic :: Internet',
80 ],
81 platforms='Posix; MacOS X; Windows',
82 packages=packages,
83 namespace_packages=namespaces,
84 install_requires=dependencies,
85 extras_require=extras,
86 include_package_data=True,
87 zip_safe=False,
88 )
89
[end of container/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/container/setup.py b/container/setup.py
--- a/container/setup.py
+++ b/container/setup.py
@@ -22,7 +22,7 @@
name = 'google-cloud-container'
description = 'Google Container Engine API client library'
-version = '0.1.1'
+version = '0.2.0'
# Should be one of:
# 'Development Status :: 3 - Alpha'
# 'Development Status :: 4 - Beta'
| {"golden_diff": "diff --git a/container/setup.py b/container/setup.py\n--- a/container/setup.py\n+++ b/container/setup.py\n@@ -22,7 +22,7 @@\n \n name = 'google-cloud-container'\n description = 'Google Container Engine API client library'\n-version = '0.1.1'\n+version = '0.2.0'\n # Should be one of:\n # 'Development Status :: 3 - Alpha'\n # 'Development Status :: 4 - Beta'\n", "issue": "Container: Regional Cluster support for GKE clusters\n\r\nI'm unable to get or create regional clusters using the container_v1 client APIs. The [documentation](https://googleapis.github.io/google-cloud-python/latest/container/gapic/v1/api.html#google.cloud.container_v1.ClusterManagerClient.create_cluster) suggests that this is possible by using the `parent` parameter to describe the project/region to launch the cluster but I get the following errors:\r\n\r\n```bash\r\n(env) david@ ~ $ which python\r\n~/dev/env/bin/python \r\n\r\n(env) david@ ~ $ pip freeze\r\n...\r\ngoogle-api-core==1.6.0\r\ngoogle-auth==1.6.1\r\ngoogle-cloud==0.34.0\r\ngoogle-cloud-container==0.1.1\r\ngoogleapis-common-protos==1.5.5\r\ngrpcio==1.16.1\r\n...\r\n\r\n(env) david@ ~ $ python --version\r\nPython 2.7.10\r\n\r\n(env) david@ ~ $ python ./get_cluster.py\r\nTraceback (most recent call last):\r\n File \"./get_cluster.py\", line 6, in <module>\r\n cluster = client.get_cluster(project_id=credentials.project_id, parent='projects/<project_id>/locations/us-east1', cluster_id='ha-cluster-1')\r\nTypeError: get_cluster() got an unexpected keyword argument 'parent'\r\n```\r\n \r\nIs it possible that the API documentation has been updated before the feature was merged or is it more likely an environment issue on my end? Any insight into this would be appreciated\r\n\r\nI have also looked at using the [google-api-python-client](https://github.com/googleapis/google-api-python-client#google-api-client) to launch regional clusters but I would prefer to use this library if the feature is supported. Are there any known workarounds for this?\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-container'\ndescription = 'Google Container Engine API client library'\nversion = '0.1.1'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 3 - Alpha'\ndependencies = [\n 'google-api-core[grpc] >= 1.6.0, < 2.0.0dev',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "container/setup.py"}]} | 1,698 | 99 |
gh_patches_debug_21296 | rasdani/github-patches | git_diff | open-mmlab__mmpose-258 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pylint: C0325
```bash
mmpose/core/evaluation/mesh_eval.py:27:0: C0325: Unnecessary parens after 'assert' keyword (superfluous-parens)
mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py:94:0: C0325: Unnecessary parens after 'assert' keyword (superfluous-parens)
```
</issue>
<code>
[start of mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py]
1 import os
2 from collections import OrderedDict
3
4 import json_tricks as json
5 import numpy as np
6
7 from mmpose.datasets.builder import DATASETS
8 from ....core.evaluation import compute_similarity_transform
9 from .mesh_base_dataset import MeshBaseDataset
10
11
12 @DATASETS.register_module()
13 class MeshH36MDataset(MeshBaseDataset):
14 """Human3.6M Dataset for 3D human mesh estimation. It inherits all function
15 from MeshBaseDataset and has its own evaluate fuction.
16
17 The dataset loads raw features and apply specified transforms
18 to return a dict containing the image tensors and other information.
19
20 Args:
21 ann_file (str): Path to the annotation file.
22 img_prefix (str): Path to a directory where images are held.
23 Default: None.
24 data_cfg (dict): config
25 pipeline (list[dict | callable]): A sequence of data transforms.
26 test_mode (bool): Store True when building test or
27 validation dataset. Default: False.
28 """
29
30 def evaluate(self, outputs, res_folder, metric='joint_error', logger=None):
31 """Evaluate 3D keypoint results."""
32 metrics = metric if isinstance(metric, list) else [metric]
33 allowed_metrics = ['joint_error']
34 for metric in metrics:
35 if metric not in allowed_metrics:
36 raise KeyError(f'metric {metric} is not supported')
37
38 res_file = os.path.join(res_folder, 'result_keypoints.json')
39 kpts = []
40 for preds, boxes, image_path in outputs:
41 kpts.append({
42 'keypoints': preds[0].tolist(),
43 'center': boxes[0][0:2].tolist(),
44 'scale': boxes[0][2:4].tolist(),
45 'area': float(boxes[0][4]),
46 'score': float(boxes[0][5]),
47 'image': image_path,
48 })
49
50 self._write_keypoint_results(kpts, res_file)
51 info_str = self._report_metric(res_file)
52 name_value = OrderedDict(info_str)
53 return name_value
54
55 def _write_keypoint_results(self, keypoints, res_file):
56 """Write results into a json file."""
57
58 with open(res_file, 'w') as f:
59 json.dump(keypoints, f, sort_keys=True, indent=4)
60
61 def _report_metric(self, res_file):
62 """Keypoint evaluation.
63
64 Report mean per joint position error (MPJPE) and mean per joint
65 position error after rigid alignment (MPJPE-PA)
66 """
67
68 with open(res_file, 'r') as fin:
69 preds = json.load(fin)
70 assert len(preds) == len(self.db)
71
72 joint_error = []
73 joint_error_pa = []
74
75 for pred, item in zip(preds, self.db):
76 error, error_pa = self.evaluate_kernel(pred['keypoints'][0],
77 item['joints_3d'],
78 item['joints_3d_visible'])
79 joint_error.append(error)
80 joint_error_pa.append(error_pa)
81
82 mpjpe = np.array(joint_error).mean()
83 mpjpe_pa = np.array(joint_error_pa).mean()
84
85 info_str = []
86 info_str.append(('MPJPE', mpjpe * 1000))
87 info_str.append(('MPJPE-PA', mpjpe_pa * 1000))
88 return info_str
89
90 def evaluate_kernel(self, pred_joints_3d, joints_3d, joints_3d_visible):
91 """Evaluate one example."""
92 # Only 14 lsp joints are used for evaluation
93 joint_mapper = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 18]
94 assert (joints_3d_visible[joint_mapper].min() > 0)
95
96 pred_joints_3d = np.array(pred_joints_3d)
97 pred_joints_3d = pred_joints_3d[joint_mapper, :]
98 pred_pelvis = (pred_joints_3d[[2]] + pred_joints_3d[[3]]) / 2
99 pred_joints_3d = pred_joints_3d - pred_pelvis
100
101 gt_joints_3d = joints_3d[joint_mapper, :]
102 gt_pelvis = (gt_joints_3d[[2]] + gt_joints_3d[[3]]) / 2
103 gt_joints_3d = gt_joints_3d - gt_pelvis
104
105 error = pred_joints_3d - gt_joints_3d
106 error = np.linalg.norm(error, ord=2, axis=-1).mean(axis=-1)
107
108 pred_joints_3d_aligned = compute_similarity_transform(
109 pred_joints_3d, gt_joints_3d)
110 error_pa = pred_joints_3d_aligned - gt_joints_3d
111 error_pa = np.linalg.norm(error_pa, ord=2, axis=-1).mean(axis=-1)
112
113 return error, error_pa
114
[end of mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py]
[start of mmpose/core/evaluation/mesh_eval.py]
1 # ------------------------------------------------------------------------------
2 # Adapted from https://github.com/akanazawa/hmr
3 # Original licence: Copyright (c) 2018 akanazawa, under the MIT License.
4 # ------------------------------------------------------------------------------
5
6 import numpy as np
7
8
9 def compute_similarity_transform(source_points, target_points):
10 """Computes a similarity transform (sR, t) that takes a set of 3D points
11 source_points (N x 3) closest to a set of 3D points target_points, where R
12 is an 3x3 rotation matrix, t 3x1 translation, s scale. And return the
13 transformed 3D points source_points_hat (N x 3). i.e. solves the orthogonal
14 Procrutes problem.
15
16 Notes:
17 Points number: N
18
19 Args:
20 source_points (np.ndarray([N, 3])): Source point set.
21 target_points (np.ndarray([N, 3])): Target point set.
22
23 Returns:
24 source_points_hat (np.ndarray([N, 3])): Transformed source point set.
25 """
26
27 assert (target_points.shape[0] == source_points.shape[0])
28 assert (target_points.shape[1] == 3 and source_points.shape[1] == 3)
29
30 source_points = source_points.T
31 target_points = target_points.T
32
33 # 1. Remove mean.
34 mu1 = source_points.mean(axis=1, keepdims=True)
35 mu2 = target_points.mean(axis=1, keepdims=True)
36 X1 = source_points - mu1
37 X2 = target_points - mu2
38
39 # 2. Compute variance of X1 used for scale.
40 var1 = np.sum(X1**2)
41
42 # 3. The outer product of X1 and X2.
43 K = X1.dot(X2.T)
44
45 # 4. Solution that Maximizes trace(R'K) is R=U*V', where U, V are
46 # singular vectors of K.
47 U, _, Vh = np.linalg.svd(K)
48 V = Vh.T
49 # Construct Z that fixes the orientation of R to get det(R)=1.
50 Z = np.eye(U.shape[0])
51 Z[-1, -1] *= np.sign(np.linalg.det(U.dot(V.T)))
52 # Construct R.
53 R = V.dot(Z.dot(U.T))
54
55 # 5. Recover scale.
56 scale = np.trace(R.dot(K)) / var1
57
58 # 6. Recover translation.
59 t = mu2 - scale * (R.dot(mu1))
60
61 # 7. Transform the source points:
62 source_points_hat = scale * R.dot(source_points) + t
63
64 source_points_hat = source_points_hat.T
65
66 return source_points_hat
67
[end of mmpose/core/evaluation/mesh_eval.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmpose/core/evaluation/mesh_eval.py b/mmpose/core/evaluation/mesh_eval.py
--- a/mmpose/core/evaluation/mesh_eval.py
+++ b/mmpose/core/evaluation/mesh_eval.py
@@ -24,8 +24,8 @@
source_points_hat (np.ndarray([N, 3])): Transformed source point set.
"""
- assert (target_points.shape[0] == source_points.shape[0])
- assert (target_points.shape[1] == 3 and source_points.shape[1] == 3)
+ assert target_points.shape[0] == source_points.shape[0]
+ assert target_points.shape[1] == 3 and source_points.shape[1] == 3
source_points = source_points.T
target_points = target_points.T
diff --git a/mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py b/mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py
--- a/mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py
+++ b/mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py
@@ -91,7 +91,7 @@
"""Evaluate one example."""
# Only 14 lsp joints are used for evaluation
joint_mapper = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 18]
- assert (joints_3d_visible[joint_mapper].min() > 0)
+ assert joints_3d_visible[joint_mapper].min() > 0
pred_joints_3d = np.array(pred_joints_3d)
pred_joints_3d = pred_joints_3d[joint_mapper, :]
| {"golden_diff": "diff --git a/mmpose/core/evaluation/mesh_eval.py b/mmpose/core/evaluation/mesh_eval.py\n--- a/mmpose/core/evaluation/mesh_eval.py\n+++ b/mmpose/core/evaluation/mesh_eval.py\n@@ -24,8 +24,8 @@\n source_points_hat (np.ndarray([N, 3])): Transformed source point set.\n \"\"\"\n \n- assert (target_points.shape[0] == source_points.shape[0])\n- assert (target_points.shape[1] == 3 and source_points.shape[1] == 3)\n+ assert target_points.shape[0] == source_points.shape[0]\n+ assert target_points.shape[1] == 3 and source_points.shape[1] == 3\n \n source_points = source_points.T\n target_points = target_points.T\ndiff --git a/mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py b/mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py\n--- a/mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py\n+++ b/mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py\n@@ -91,7 +91,7 @@\n \"\"\"Evaluate one example.\"\"\"\n # Only 14 lsp joints are used for evaluation\n joint_mapper = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 18]\n- assert (joints_3d_visible[joint_mapper].min() > 0)\n+ assert joints_3d_visible[joint_mapper].min() > 0\n \n pred_joints_3d = np.array(pred_joints_3d)\n pred_joints_3d = pred_joints_3d[joint_mapper, :]\n", "issue": "Pylint: C0325\n```bash\r\nmmpose/core/evaluation/mesh_eval.py:27:0: C0325: Unnecessary parens after 'assert' keyword (superfluous-parens)\r\nmmpose/datasets/datasets/mesh/mesh_h36m_dataset.py:94:0: C0325: Unnecessary parens after 'assert' keyword (superfluous-parens)\r\n```\n", "before_files": [{"content": "import os\nfrom collections import OrderedDict\n\nimport json_tricks as json\nimport numpy as np\n\nfrom mmpose.datasets.builder import DATASETS\nfrom ....core.evaluation import compute_similarity_transform\nfrom .mesh_base_dataset import MeshBaseDataset\n\n\[email protected]_module()\nclass MeshH36MDataset(MeshBaseDataset):\n \"\"\"Human3.6M Dataset for 3D human mesh estimation. It inherits all function\n from MeshBaseDataset and has its own evaluate fuction.\n\n The dataset loads raw features and apply specified transforms\n to return a dict containing the image tensors and other information.\n\n Args:\n ann_file (str): Path to the annotation file.\n img_prefix (str): Path to a directory where images are held.\n Default: None.\n data_cfg (dict): config\n pipeline (list[dict | callable]): A sequence of data transforms.\n test_mode (bool): Store True when building test or\n validation dataset. Default: False.\n \"\"\"\n\n def evaluate(self, outputs, res_folder, metric='joint_error', logger=None):\n \"\"\"Evaluate 3D keypoint results.\"\"\"\n metrics = metric if isinstance(metric, list) else [metric]\n allowed_metrics = ['joint_error']\n for metric in metrics:\n if metric not in allowed_metrics:\n raise KeyError(f'metric {metric} is not supported')\n\n res_file = os.path.join(res_folder, 'result_keypoints.json')\n kpts = []\n for preds, boxes, image_path in outputs:\n kpts.append({\n 'keypoints': preds[0].tolist(),\n 'center': boxes[0][0:2].tolist(),\n 'scale': boxes[0][2:4].tolist(),\n 'area': float(boxes[0][4]),\n 'score': float(boxes[0][5]),\n 'image': image_path,\n })\n\n self._write_keypoint_results(kpts, res_file)\n info_str = self._report_metric(res_file)\n name_value = OrderedDict(info_str)\n return name_value\n\n def _write_keypoint_results(self, keypoints, res_file):\n \"\"\"Write results into a json file.\"\"\"\n\n with open(res_file, 'w') as f:\n json.dump(keypoints, f, sort_keys=True, indent=4)\n\n def _report_metric(self, res_file):\n \"\"\"Keypoint evaluation.\n\n Report mean per joint position error (MPJPE) and mean per joint\n position error after rigid alignment (MPJPE-PA)\n \"\"\"\n\n with open(res_file, 'r') as fin:\n preds = json.load(fin)\n assert len(preds) == len(self.db)\n\n joint_error = []\n joint_error_pa = []\n\n for pred, item in zip(preds, self.db):\n error, error_pa = self.evaluate_kernel(pred['keypoints'][0],\n item['joints_3d'],\n item['joints_3d_visible'])\n joint_error.append(error)\n joint_error_pa.append(error_pa)\n\n mpjpe = np.array(joint_error).mean()\n mpjpe_pa = np.array(joint_error_pa).mean()\n\n info_str = []\n info_str.append(('MPJPE', mpjpe * 1000))\n info_str.append(('MPJPE-PA', mpjpe_pa * 1000))\n return info_str\n\n def evaluate_kernel(self, pred_joints_3d, joints_3d, joints_3d_visible):\n \"\"\"Evaluate one example.\"\"\"\n # Only 14 lsp joints are used for evaluation\n joint_mapper = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 18]\n assert (joints_3d_visible[joint_mapper].min() > 0)\n\n pred_joints_3d = np.array(pred_joints_3d)\n pred_joints_3d = pred_joints_3d[joint_mapper, :]\n pred_pelvis = (pred_joints_3d[[2]] + pred_joints_3d[[3]]) / 2\n pred_joints_3d = pred_joints_3d - pred_pelvis\n\n gt_joints_3d = joints_3d[joint_mapper, :]\n gt_pelvis = (gt_joints_3d[[2]] + gt_joints_3d[[3]]) / 2\n gt_joints_3d = gt_joints_3d - gt_pelvis\n\n error = pred_joints_3d - gt_joints_3d\n error = np.linalg.norm(error, ord=2, axis=-1).mean(axis=-1)\n\n pred_joints_3d_aligned = compute_similarity_transform(\n pred_joints_3d, gt_joints_3d)\n error_pa = pred_joints_3d_aligned - gt_joints_3d\n error_pa = np.linalg.norm(error_pa, ord=2, axis=-1).mean(axis=-1)\n\n return error, error_pa\n", "path": "mmpose/datasets/datasets/mesh/mesh_h36m_dataset.py"}, {"content": "# ------------------------------------------------------------------------------\n# Adapted from https://github.com/akanazawa/hmr\n# Original licence: Copyright (c) 2018 akanazawa, under the MIT License.\n# ------------------------------------------------------------------------------\n\nimport numpy as np\n\n\ndef compute_similarity_transform(source_points, target_points):\n \"\"\"Computes a similarity transform (sR, t) that takes a set of 3D points\n source_points (N x 3) closest to a set of 3D points target_points, where R\n is an 3x3 rotation matrix, t 3x1 translation, s scale. And return the\n transformed 3D points source_points_hat (N x 3). i.e. solves the orthogonal\n Procrutes problem.\n\n Notes:\n Points number: N\n\n Args:\n source_points (np.ndarray([N, 3])): Source point set.\n target_points (np.ndarray([N, 3])): Target point set.\n\n Returns:\n source_points_hat (np.ndarray([N, 3])): Transformed source point set.\n \"\"\"\n\n assert (target_points.shape[0] == source_points.shape[0])\n assert (target_points.shape[1] == 3 and source_points.shape[1] == 3)\n\n source_points = source_points.T\n target_points = target_points.T\n\n # 1. Remove mean.\n mu1 = source_points.mean(axis=1, keepdims=True)\n mu2 = target_points.mean(axis=1, keepdims=True)\n X1 = source_points - mu1\n X2 = target_points - mu2\n\n # 2. Compute variance of X1 used for scale.\n var1 = np.sum(X1**2)\n\n # 3. The outer product of X1 and X2.\n K = X1.dot(X2.T)\n\n # 4. Solution that Maximizes trace(R'K) is R=U*V', where U, V are\n # singular vectors of K.\n U, _, Vh = np.linalg.svd(K)\n V = Vh.T\n # Construct Z that fixes the orientation of R to get det(R)=1.\n Z = np.eye(U.shape[0])\n Z[-1, -1] *= np.sign(np.linalg.det(U.dot(V.T)))\n # Construct R.\n R = V.dot(Z.dot(U.T))\n\n # 5. Recover scale.\n scale = np.trace(R.dot(K)) / var1\n\n # 6. Recover translation.\n t = mu2 - scale * (R.dot(mu1))\n\n # 7. Transform the source points:\n source_points_hat = scale * R.dot(source_points) + t\n\n source_points_hat = source_points_hat.T\n\n return source_points_hat\n", "path": "mmpose/core/evaluation/mesh_eval.py"}]} | 2,766 | 418 |
gh_patches_debug_1513 | rasdani/github-patches | git_diff | searx__searx-1093 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] Python 3.6 Autocomplete not work
Use searx with archlinux and python 3.6 [https://aur.archlinux.org/packages/searx-py3](https://aur.archlinux.org/packages/searx-py3)
Autocomplete not working, log :
```
rv = self.dispatch_request()
File "/usr/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/lib/python3.6/site-packages/searx/webapp.py", line 609, in autocompleter
raw_text_query.parse_query()
File "/usr/lib/python3.6/site-packages/searx/query.py", line 55, in parse_query
raw_query_parts = re.split(r'(\s+)', self.query)
File "/usr/lib/python3.6/re.py", line 212, in split
return _compile(pattern, flags).split(string, maxsplit)
TypeError: cannot use a string pattern on a bytes-like object
```
</issue>
<code>
[start of searx/query.py]
1 #!/usr/bin/env python
2
3 '''
4 searx is free software: you can redistribute it and/or modify
5 it under the terms of the GNU Affero General Public License as published by
6 the Free Software Foundation, either version 3 of the License, or
7 (at your option) any later version.
8
9 searx is distributed in the hope that it will be useful,
10 but WITHOUT ANY WARRANTY; without even the implied warranty of
11 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 GNU Affero General Public License for more details.
13
14 You should have received a copy of the GNU Affero General Public License
15 along with searx. If not, see < http://www.gnu.org/licenses/ >.
16
17 (C) 2014 by Thomas Pointhuber, <[email protected]>
18 '''
19
20 from searx.languages import language_codes
21 from searx.engines import (
22 categories, engines, engine_shortcuts
23 )
24 import re
25 import sys
26
27 if sys.version_info[0] == 3:
28 unicode = str
29
30 VALID_LANGUAGE_CODE = re.compile(r'^[a-z]{2,3}(-[a-zA-Z]{2})?$')
31
32
33 class RawTextQuery(object):
34 """parse raw text query (the value from the html input)"""
35
36 def __init__(self, query, disabled_engines):
37 self.query = query
38 self.disabled_engines = []
39
40 if disabled_engines:
41 self.disabled_engines = disabled_engines
42
43 self.query_parts = []
44 self.engines = []
45 self.languages = []
46 self.specific = False
47
48 # parse query, if tags are set, which
49 # change the serch engine or search-language
50 def parse_query(self):
51 self.query_parts = []
52
53 # split query, including whitespaces
54 raw_query_parts = re.split(r'(\s+)', self.query)
55
56 parse_next = True
57
58 for query_part in raw_query_parts:
59 if not parse_next:
60 self.query_parts[-1] += query_part
61 continue
62
63 parse_next = False
64
65 # part does only contain spaces, skip
66 if query_part.isspace()\
67 or query_part == '':
68 parse_next = True
69 self.query_parts.append(query_part)
70 continue
71
72 # this force a language
73 if query_part[0] == ':':
74 lang = query_part[1:].lower().replace('_', '-')
75
76 # user may set a valid, yet not selectable language
77 if VALID_LANGUAGE_CODE.match(lang):
78 self.languages.append(lang)
79 parse_next = True
80
81 # check if any language-code is equal with
82 # declared language-codes
83 for lc in language_codes:
84 lang_id, lang_name, country, english_name = map(unicode.lower, lc)
85
86 # if correct language-code is found
87 # set it as new search-language
88 if lang == lang_id\
89 or lang_id.startswith(lang)\
90 or lang == lang_name\
91 or lang == english_name\
92 or lang.replace('-', ' ') == country:
93 parse_next = True
94 self.languages.append(lang_id)
95 # to ensure best match (first match is not necessarily the best one)
96 if lang == lang_id:
97 break
98
99 # this force a engine or category
100 if query_part[0] == '!' or query_part[0] == '?':
101 prefix = query_part[1:].replace('-', ' ').replace('_', ' ')
102
103 # check if prefix is equal with engine shortcut
104 if prefix in engine_shortcuts:
105 parse_next = True
106 self.engines.append({'category': 'none',
107 'name': engine_shortcuts[prefix]})
108
109 # check if prefix is equal with engine name
110 elif prefix in engines:
111 parse_next = True
112 self.engines.append({'category': 'none',
113 'name': prefix})
114
115 # check if prefix is equal with categorie name
116 elif prefix in categories:
117 # using all engines for that search, which
118 # are declared under that categorie name
119 parse_next = True
120 self.engines.extend({'category': prefix,
121 'name': engine.name}
122 for engine in categories[prefix]
123 if (engine.name, prefix) not in self.disabled_engines)
124
125 if query_part[0] == '!':
126 self.specific = True
127
128 # append query part to query_part list
129 self.query_parts.append(query_part)
130
131 def changeSearchQuery(self, search_query):
132 if len(self.query_parts):
133 self.query_parts[-1] = search_query
134 else:
135 self.query_parts.append(search_query)
136
137 def getSearchQuery(self):
138 if len(self.query_parts):
139 return self.query_parts[-1]
140 else:
141 return ''
142
143 def getFullQuery(self):
144 # get full querry including whitespaces
145 return u''.join(self.query_parts)
146
147
148 class SearchQuery(object):
149 """container for all the search parameters (query, language, etc...)"""
150
151 def __init__(self, query, engines, categories, lang, safesearch, pageno, time_range):
152 self.query = query.encode('utf-8')
153 self.engines = engines
154 self.categories = categories
155 self.lang = lang
156 self.safesearch = safesearch
157 self.pageno = pageno
158 self.time_range = time_range
159
160 def __str__(self):
161 return str(self.query) + ";" + str(self.engines)
162
[end of searx/query.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/query.py b/searx/query.py
--- a/searx/query.py
+++ b/searx/query.py
@@ -51,7 +51,7 @@
self.query_parts = []
# split query, including whitespaces
- raw_query_parts = re.split(r'(\s+)', self.query)
+ raw_query_parts = re.split(r'(\s+)' if isinstance(self.query, str) else b'(\s+)', self.query)
parse_next = True
| {"golden_diff": "diff --git a/searx/query.py b/searx/query.py\n--- a/searx/query.py\n+++ b/searx/query.py\n@@ -51,7 +51,7 @@\n self.query_parts = []\n \n # split query, including whitespaces\n- raw_query_parts = re.split(r'(\\s+)', self.query)\n+ raw_query_parts = re.split(r'(\\s+)' if isinstance(self.query, str) else b'(\\s+)', self.query)\n \n parse_next = True\n", "issue": "[bug] Python 3.6 Autocomplete not work\nUse searx with archlinux and python 3.6 [https://aur.archlinux.org/packages/searx-py3](https://aur.archlinux.org/packages/searx-py3)\r\nAutocomplete not working, log :\r\n```\r\n rv = self.dispatch_request()\r\n File \"/usr/lib/python3.6/site-packages/flask/app.py\", line 1598, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/usr/lib/python3.6/site-packages/searx/webapp.py\", line 609, in autocompleter\r\n raw_text_query.parse_query()\r\n File \"/usr/lib/python3.6/site-packages/searx/query.py\", line 55, in parse_query\r\n raw_query_parts = re.split(r'(\\s+)', self.query)\r\n File \"/usr/lib/python3.6/re.py\", line 212, in split\r\n return _compile(pattern, flags).split(string, maxsplit)\r\n TypeError: cannot use a string pattern on a bytes-like object\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\n'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2014 by Thomas Pointhuber, <[email protected]>\n'''\n\nfrom searx.languages import language_codes\nfrom searx.engines import (\n categories, engines, engine_shortcuts\n)\nimport re\nimport sys\n\nif sys.version_info[0] == 3:\n unicode = str\n\nVALID_LANGUAGE_CODE = re.compile(r'^[a-z]{2,3}(-[a-zA-Z]{2})?$')\n\n\nclass RawTextQuery(object):\n \"\"\"parse raw text query (the value from the html input)\"\"\"\n\n def __init__(self, query, disabled_engines):\n self.query = query\n self.disabled_engines = []\n\n if disabled_engines:\n self.disabled_engines = disabled_engines\n\n self.query_parts = []\n self.engines = []\n self.languages = []\n self.specific = False\n\n # parse query, if tags are set, which\n # change the serch engine or search-language\n def parse_query(self):\n self.query_parts = []\n\n # split query, including whitespaces\n raw_query_parts = re.split(r'(\\s+)', self.query)\n\n parse_next = True\n\n for query_part in raw_query_parts:\n if not parse_next:\n self.query_parts[-1] += query_part\n continue\n\n parse_next = False\n\n # part does only contain spaces, skip\n if query_part.isspace()\\\n or query_part == '':\n parse_next = True\n self.query_parts.append(query_part)\n continue\n\n # this force a language\n if query_part[0] == ':':\n lang = query_part[1:].lower().replace('_', '-')\n\n # user may set a valid, yet not selectable language\n if VALID_LANGUAGE_CODE.match(lang):\n self.languages.append(lang)\n parse_next = True\n\n # check if any language-code is equal with\n # declared language-codes\n for lc in language_codes:\n lang_id, lang_name, country, english_name = map(unicode.lower, lc)\n\n # if correct language-code is found\n # set it as new search-language\n if lang == lang_id\\\n or lang_id.startswith(lang)\\\n or lang == lang_name\\\n or lang == english_name\\\n or lang.replace('-', ' ') == country:\n parse_next = True\n self.languages.append(lang_id)\n # to ensure best match (first match is not necessarily the best one)\n if lang == lang_id:\n break\n\n # this force a engine or category\n if query_part[0] == '!' or query_part[0] == '?':\n prefix = query_part[1:].replace('-', ' ').replace('_', ' ')\n\n # check if prefix is equal with engine shortcut\n if prefix in engine_shortcuts:\n parse_next = True\n self.engines.append({'category': 'none',\n 'name': engine_shortcuts[prefix]})\n\n # check if prefix is equal with engine name\n elif prefix in engines:\n parse_next = True\n self.engines.append({'category': 'none',\n 'name': prefix})\n\n # check if prefix is equal with categorie name\n elif prefix in categories:\n # using all engines for that search, which\n # are declared under that categorie name\n parse_next = True\n self.engines.extend({'category': prefix,\n 'name': engine.name}\n for engine in categories[prefix]\n if (engine.name, prefix) not in self.disabled_engines)\n\n if query_part[0] == '!':\n self.specific = True\n\n # append query part to query_part list\n self.query_parts.append(query_part)\n\n def changeSearchQuery(self, search_query):\n if len(self.query_parts):\n self.query_parts[-1] = search_query\n else:\n self.query_parts.append(search_query)\n\n def getSearchQuery(self):\n if len(self.query_parts):\n return self.query_parts[-1]\n else:\n return ''\n\n def getFullQuery(self):\n # get full querry including whitespaces\n return u''.join(self.query_parts)\n\n\nclass SearchQuery(object):\n \"\"\"container for all the search parameters (query, language, etc...)\"\"\"\n\n def __init__(self, query, engines, categories, lang, safesearch, pageno, time_range):\n self.query = query.encode('utf-8')\n self.engines = engines\n self.categories = categories\n self.lang = lang\n self.safesearch = safesearch\n self.pageno = pageno\n self.time_range = time_range\n\n def __str__(self):\n return str(self.query) + \";\" + str(self.engines)\n", "path": "searx/query.py"}]} | 2,333 | 115 |
gh_patches_debug_60828 | rasdani/github-patches | git_diff | microsoft__AzureTRE-524 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Service bus message times out on deployment of workspace template
**Describe the bug**
When deploying a template that takes > 10 minutes, although deployment is successful the status is not updated.
**Steps to reproduce**
1. Register and deploy the `azureml_devtestlabs` workspace
2. Log on to the VMSS resource processor using bastion
3. View the docker logs, wait until deployment is complete, and see similar to:
`LinkDetach("ErrorCodes.LinkDetachForced: The link 'G3:5725658:sender-link-bd7b69d4-9ad4-4b9b-b9f6-2e311be400a3' is force detached. Code: publisher(link3135). Details: AmqpMessagePublisher.IdleTimerExpired: Idle timeout: 00:10:00.")`
</issue>
<code>
[start of processor_function/vm_porter/runner.py]
1 import os
2 import sys
3 import json
4 import socket
5 import asyncio
6 import logging
7 from shared.logging import disable_unwanted_loggers, initialize_logging # pylint: disable=import-error # noqa
8 from resources import strings # pylint: disable=import-error # noqa
9 from contextlib import asynccontextmanager
10 from azure.servicebus import ServiceBusMessage
11 from azure.servicebus.aio import ServiceBusClient, AutoLockRenewer
12 from azure.identity.aio import DefaultAzureCredential
13
14 logger_adapter = initialize_logging(logging.INFO, socket.gethostname())
15 disable_unwanted_loggers()
16
17
18 @asynccontextmanager
19 async def default_credentials(msi_id):
20 """
21 Context manager which yields the default credentials.
22 """
23 credential = DefaultAzureCredential(managed_identity_client_id=msi_id) if msi_id else DefaultAzureCredential()
24 yield credential
25 await credential.close()
26
27
28 async def receive_message(env_vars, service_bus_client):
29 """
30 This method is an async generator which receives messages from service bus
31 and yields those messages. If the yielded function return True the message is
32 marked complete.
33 """
34 async with service_bus_client:
35 q_name = env_vars["resource_request_queue"]
36 renewer = AutoLockRenewer()
37 receiver = service_bus_client.get_queue_receiver(queue_name=q_name, auto_lock_renewer=renewer)
38
39 async with receiver:
40 received_msgs = await receiver.receive_messages(max_message_count=10, max_wait_time=5)
41
42 for msg in received_msgs:
43 result = True
44 message = ""
45
46 try:
47 message = json.loads(str(msg))
48 result = (yield message)
49 except (json.JSONDecodeError) as e:
50 logging.error(f"Received bad service bus resource request message: {e}")
51 if result:
52 logging.info(f"Resource request for {message} is complete")
53 else:
54 logging.error('Message processing failed!')
55 logger_adapter.info(f"Message with id = {message['id']} processed as {result} and marked complete.")
56 await receiver.complete_message(msg)
57
58
59 def azure_login_command(env_vars):
60 local_login = f"az login --service-principal --username {env_vars['arm_client_id']} --password {env_vars['arm_client_secret']} --tenant {env_vars['arm_tenant_id']}"
61 vmss_login = f"az login --identity -u {env_vars['vmss_msi_id']}"
62 command = vmss_login if env_vars['vmss_msi_id'] else local_login
63 return command
64
65
66 def build_porter_command(msg_body, env_vars):
67 porter_parameters = ""
68 for parameter in msg_body['parameters']:
69 porter_parameters = porter_parameters + f" --param {parameter}={msg_body['parameters'][parameter]}"
70
71 installation_id = msg_body['parameters']['tre_id'] + "-" + msg_body['parameters']['workspace_id']
72
73 porter_parameters = porter_parameters + f" --param tfstate_container_name={env_vars['tfstate_container_name']}"
74 porter_parameters = porter_parameters + f" --param tfstate_resource_group_name={env_vars['tfstate_resource_group_name']}"
75 porter_parameters = porter_parameters + f" --param tfstate_storage_account_name={env_vars['tfstate_storage_account_name']}"
76 porter_parameters = porter_parameters + f" --param arm_use_msi={env_vars['arm_use_msi']}"
77
78 command_line = [f"{azure_login_command(env_vars)} && az acr login --name {env_vars['registry_server'].replace('.azurecr.io','')} && porter "
79 f"{msg_body['action']} {installation_id} "
80 f" --reference {env_vars['registry_server']}/{msg_body['name']}:v{msg_body['version']}"
81 f" {porter_parameters} --cred ./vm_porter/azure.json --allow-docker-host-access"
82 f" && porter show {installation_id}"]
83 return command_line
84
85
86 def porter_envs(env_var):
87 porter_env_vars = {}
88 porter_env_vars["HOME"] = os.environ['HOME']
89 porter_env_vars["PATH"] = os.environ['PATH']
90 porter_env_vars["ARM_CLIENT_ID"] = env_var["arm_client_id"]
91 porter_env_vars["ARM_CLIENT_SECRET"] = env_var["arm_client_secret"]
92 porter_env_vars["ARM_SUBSCRIPTION_ID"] = env_var["arm_subscription_id"]
93 porter_env_vars["ARM_TENANT_ID"] = env_var["arm_tenant_id"]
94
95 return porter_env_vars
96
97
98 async def run_porter(command, env_vars):
99 proc = await asyncio.create_subprocess_shell(
100 ''.join(command),
101 stdout=asyncio.subprocess.PIPE,
102 stderr=asyncio.subprocess.PIPE,
103 env=porter_envs(env_vars))
104
105 stdout, stderr = await proc.communicate()
106 logging.info(f'[{command!r} exited with {proc.returncode}]')
107 result_stdout = None
108 result_stderr = None
109 if stdout:
110 result_stdout = stdout.decode()
111 logger_adapter.info('[stdout]')
112 for string in result_stdout.split('\n'):
113 if len(string) != 0:
114 logger_adapter.info(str(string))
115 if stderr:
116 result_stderr = stderr.decode()
117 logger_adapter.info('[stderr]')
118 for string in result_stderr.split('\n'):
119 if len(string) != 0:
120 logger_adapter.info(str(string))
121
122 return (proc.returncode, result_stdout, result_stderr)
123
124
125 def service_bus_message_generator(sb_message, status, deployment_message):
126 installation_id = sb_message['parameters']['tre_id'] + "-" + sb_message['parameters']['workspace_id']
127 resource_request_message = json.dumps({
128 "id": sb_message["id"],
129 "status": status,
130 "message": f"{installation_id}: {deployment_message}"
131 })
132 return resource_request_message
133
134
135 async def deploy_porter_bundle(msg_body, sb_client, env_vars, message_logger_adapter):
136 installation_id = msg_body['parameters']['tre_id'] + "-" + msg_body['parameters']['workspace_id']
137 message_logger_adapter.info(f"{installation_id}: Deployment job configuration starting")
138 sb_sender = sb_client.get_queue_sender(queue_name=env_vars["deployment_status_queue"])
139 resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_DEPLOYING, "Deployment job starting")
140 await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body["id"]))
141
142 returncode, _, err = await run_porter(build_porter_command(msg_body, env_vars), env_vars)
143 if returncode != 0:
144 error_message = "Error context message = " + " ".join(err.split('\n'))
145 resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_FAILED, error_message)
146 await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body["id"]))
147 message_logger_adapter.info(f"{installation_id}: Deployment job configuration failed error = {error_message}")
148 return False
149 else:
150 success_message = "Workspace was deployed successfully..."
151 resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_DEPLOYED, success_message)
152 await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body["id"]))
153 message_logger_adapter.info(f"{installation_id}: {success_message}")
154 return True
155
156
157 async def runner(env_vars):
158 msi_id = env_vars["vmss_msi_id"]
159 service_bus_namespace = env_vars["service_bus_namespace"]
160 async with default_credentials(msi_id) as credential:
161 service_bus_client = ServiceBusClient(service_bus_namespace, credential)
162 logger_adapter.info("Starting message receiving loop...")
163 while True:
164 logger_adapter.info("Checking for new messages...")
165 receive_message_gen = receive_message(env_vars, service_bus_client)
166 try:
167 async for message in receive_message_gen:
168 logger_adapter.info(f"Message received for id={message['id']}")
169 message_logger_adapter = initialize_logging(logging.INFO, message['id'])
170 result = await deploy_porter_bundle(message, service_bus_client, env_vars, message_logger_adapter)
171 await receive_message_gen.asend(result)
172 except StopAsyncIteration: # the async generator when finished signals end with this exception.
173 pass
174 logger_adapter.info("All messages done sleeping...")
175 await asyncio.sleep(60)
176
177
178 def read_env_vars():
179 env_vars = {
180 # Needed for local dev
181 "app_id": os.environ.get("AZURE_CLIENT_ID", None),
182 "app_password": os.environ.get("AZURE_CLIENT_SECRET", None),
183
184 "registry_server": os.environ["REGISTRY_SERVER"],
185 "tfstate_container_name": os.environ['TERRAFORM_STATE_CONTAINER_NAME'],
186 "tfstate_resource_group_name": os.environ['MGMT_RESOURCE_GROUP_NAME'],
187 "tfstate_storage_account_name": os.environ['MGMT_STORAGE_ACCOUNT_NAME'],
188 "deployment_status_queue": os.environ['SERVICE_BUS_DEPLOYMENT_STATUS_UPDATE_QUEUE'],
189 "resource_request_queue": os.environ['SERVICE_BUS_RESOURCE_REQUEST_QUEUE'],
190 "service_bus_namespace": os.environ['SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE'],
191 "vmss_msi_id": os.environ.get('VMSS_MSI_ID', None),
192
193 # Needed for running porter
194 "arm_use_msi": os.environ["ARM_USE_MSI"],
195 "arm_subscription_id": os.environ['ARM_SUBSCRIPTION_ID'],
196 "arm_client_id": os.environ["ARM_CLIENT_ID"],
197 "arm_tenant_id": os.environ["ARM_TENANT_ID"]
198 }
199
200 env_vars["arm_client_secret"] = os.environ["ARM_CLIENT_SECRET"] if env_vars["arm_use_msi"] == "false" else ""
201
202 return env_vars
203
204
205 if __name__ == "__main__":
206 try:
207 env_vars = read_env_vars()
208 except KeyError as e:
209 logger_adapter.error(f"Environment variable {e} is not set correctly...Exiting")
210 sys.exit(1)
211 logger_adapter.info("Started processor")
212 asyncio.run(runner(env_vars))
213
[end of processor_function/vm_porter/runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/processor_function/vm_porter/runner.py b/processor_function/vm_porter/runner.py
--- a/processor_function/vm_porter/runner.py
+++ b/processor_function/vm_porter/runner.py
@@ -33,7 +33,7 @@
"""
async with service_bus_client:
q_name = env_vars["resource_request_queue"]
- renewer = AutoLockRenewer()
+ renewer = AutoLockRenewer(max_lock_renewal_duration=1800)
receiver = service_bus_client.get_queue_receiver(queue_name=q_name, auto_lock_renewer=renewer)
async with receiver:
| {"golden_diff": "diff --git a/processor_function/vm_porter/runner.py b/processor_function/vm_porter/runner.py\n--- a/processor_function/vm_porter/runner.py\n+++ b/processor_function/vm_porter/runner.py\n@@ -33,7 +33,7 @@\n \"\"\"\n async with service_bus_client:\n q_name = env_vars[\"resource_request_queue\"]\n- renewer = AutoLockRenewer()\n+ renewer = AutoLockRenewer(max_lock_renewal_duration=1800)\n receiver = service_bus_client.get_queue_receiver(queue_name=q_name, auto_lock_renewer=renewer)\n \n async with receiver:\n", "issue": "[BUG] Service bus message times out on deployment of workspace template \n**Describe the bug**\r\nWhen deploying a template that takes > 10 minutes, although deployment is successful the status is not updated.\r\n\r\n**Steps to reproduce**\r\n\r\n1. Register and deploy the `azureml_devtestlabs` workspace\r\n2. Log on to the VMSS resource processor using bastion\r\n3. View the docker logs, wait until deployment is complete, and see similar to:\r\n\r\n`LinkDetach(\"ErrorCodes.LinkDetachForced: The link 'G3:5725658:sender-link-bd7b69d4-9ad4-4b9b-b9f6-2e311be400a3' is force detached. Code: publisher(link3135). Details: AmqpMessagePublisher.IdleTimerExpired: Idle timeout: 00:10:00.\")`\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import os\nimport sys\nimport json\nimport socket\nimport asyncio\nimport logging\nfrom shared.logging import disable_unwanted_loggers, initialize_logging # pylint: disable=import-error # noqa\nfrom resources import strings # pylint: disable=import-error # noqa\nfrom contextlib import asynccontextmanager\nfrom azure.servicebus import ServiceBusMessage\nfrom azure.servicebus.aio import ServiceBusClient, AutoLockRenewer\nfrom azure.identity.aio import DefaultAzureCredential\n\nlogger_adapter = initialize_logging(logging.INFO, socket.gethostname())\ndisable_unwanted_loggers()\n\n\n@asynccontextmanager\nasync def default_credentials(msi_id):\n \"\"\"\n Context manager which yields the default credentials.\n \"\"\"\n credential = DefaultAzureCredential(managed_identity_client_id=msi_id) if msi_id else DefaultAzureCredential()\n yield credential\n await credential.close()\n\n\nasync def receive_message(env_vars, service_bus_client):\n \"\"\"\n This method is an async generator which receives messages from service bus\n and yields those messages. If the yielded function return True the message is\n marked complete.\n \"\"\"\n async with service_bus_client:\n q_name = env_vars[\"resource_request_queue\"]\n renewer = AutoLockRenewer()\n receiver = service_bus_client.get_queue_receiver(queue_name=q_name, auto_lock_renewer=renewer)\n\n async with receiver:\n received_msgs = await receiver.receive_messages(max_message_count=10, max_wait_time=5)\n\n for msg in received_msgs:\n result = True\n message = \"\"\n\n try:\n message = json.loads(str(msg))\n result = (yield message)\n except (json.JSONDecodeError) as e:\n logging.error(f\"Received bad service bus resource request message: {e}\")\n if result:\n logging.info(f\"Resource request for {message} is complete\")\n else:\n logging.error('Message processing failed!')\n logger_adapter.info(f\"Message with id = {message['id']} processed as {result} and marked complete.\")\n await receiver.complete_message(msg)\n\n\ndef azure_login_command(env_vars):\n local_login = f\"az login --service-principal --username {env_vars['arm_client_id']} --password {env_vars['arm_client_secret']} --tenant {env_vars['arm_tenant_id']}\"\n vmss_login = f\"az login --identity -u {env_vars['vmss_msi_id']}\"\n command = vmss_login if env_vars['vmss_msi_id'] else local_login\n return command\n\n\ndef build_porter_command(msg_body, env_vars):\n porter_parameters = \"\"\n for parameter in msg_body['parameters']:\n porter_parameters = porter_parameters + f\" --param {parameter}={msg_body['parameters'][parameter]}\"\n\n installation_id = msg_body['parameters']['tre_id'] + \"-\" + msg_body['parameters']['workspace_id']\n\n porter_parameters = porter_parameters + f\" --param tfstate_container_name={env_vars['tfstate_container_name']}\"\n porter_parameters = porter_parameters + f\" --param tfstate_resource_group_name={env_vars['tfstate_resource_group_name']}\"\n porter_parameters = porter_parameters + f\" --param tfstate_storage_account_name={env_vars['tfstate_storage_account_name']}\"\n porter_parameters = porter_parameters + f\" --param arm_use_msi={env_vars['arm_use_msi']}\"\n\n command_line = [f\"{azure_login_command(env_vars)} && az acr login --name {env_vars['registry_server'].replace('.azurecr.io','')} && porter \"\n f\"{msg_body['action']} {installation_id} \"\n f\" --reference {env_vars['registry_server']}/{msg_body['name']}:v{msg_body['version']}\"\n f\" {porter_parameters} --cred ./vm_porter/azure.json --allow-docker-host-access\"\n f\" && porter show {installation_id}\"]\n return command_line\n\n\ndef porter_envs(env_var):\n porter_env_vars = {}\n porter_env_vars[\"HOME\"] = os.environ['HOME']\n porter_env_vars[\"PATH\"] = os.environ['PATH']\n porter_env_vars[\"ARM_CLIENT_ID\"] = env_var[\"arm_client_id\"]\n porter_env_vars[\"ARM_CLIENT_SECRET\"] = env_var[\"arm_client_secret\"]\n porter_env_vars[\"ARM_SUBSCRIPTION_ID\"] = env_var[\"arm_subscription_id\"]\n porter_env_vars[\"ARM_TENANT_ID\"] = env_var[\"arm_tenant_id\"]\n\n return porter_env_vars\n\n\nasync def run_porter(command, env_vars):\n proc = await asyncio.create_subprocess_shell(\n ''.join(command),\n stdout=asyncio.subprocess.PIPE,\n stderr=asyncio.subprocess.PIPE,\n env=porter_envs(env_vars))\n\n stdout, stderr = await proc.communicate()\n logging.info(f'[{command!r} exited with {proc.returncode}]')\n result_stdout = None\n result_stderr = None\n if stdout:\n result_stdout = stdout.decode()\n logger_adapter.info('[stdout]')\n for string in result_stdout.split('\\n'):\n if len(string) != 0:\n logger_adapter.info(str(string))\n if stderr:\n result_stderr = stderr.decode()\n logger_adapter.info('[stderr]')\n for string in result_stderr.split('\\n'):\n if len(string) != 0:\n logger_adapter.info(str(string))\n\n return (proc.returncode, result_stdout, result_stderr)\n\n\ndef service_bus_message_generator(sb_message, status, deployment_message):\n installation_id = sb_message['parameters']['tre_id'] + \"-\" + sb_message['parameters']['workspace_id']\n resource_request_message = json.dumps({\n \"id\": sb_message[\"id\"],\n \"status\": status,\n \"message\": f\"{installation_id}: {deployment_message}\"\n })\n return resource_request_message\n\n\nasync def deploy_porter_bundle(msg_body, sb_client, env_vars, message_logger_adapter):\n installation_id = msg_body['parameters']['tre_id'] + \"-\" + msg_body['parameters']['workspace_id']\n message_logger_adapter.info(f\"{installation_id}: Deployment job configuration starting\")\n sb_sender = sb_client.get_queue_sender(queue_name=env_vars[\"deployment_status_queue\"])\n resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_DEPLOYING, \"Deployment job starting\")\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body[\"id\"]))\n\n returncode, _, err = await run_porter(build_porter_command(msg_body, env_vars), env_vars)\n if returncode != 0:\n error_message = \"Error context message = \" + \" \".join(err.split('\\n'))\n resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_FAILED, error_message)\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body[\"id\"]))\n message_logger_adapter.info(f\"{installation_id}: Deployment job configuration failed error = {error_message}\")\n return False\n else:\n success_message = \"Workspace was deployed successfully...\"\n resource_request_message = service_bus_message_generator(msg_body, strings.RESOURCE_STATUS_DEPLOYED, success_message)\n await sb_sender.send_messages(ServiceBusMessage(body=resource_request_message, correlation_id=msg_body[\"id\"]))\n message_logger_adapter.info(f\"{installation_id}: {success_message}\")\n return True\n\n\nasync def runner(env_vars):\n msi_id = env_vars[\"vmss_msi_id\"]\n service_bus_namespace = env_vars[\"service_bus_namespace\"]\n async with default_credentials(msi_id) as credential:\n service_bus_client = ServiceBusClient(service_bus_namespace, credential)\n logger_adapter.info(\"Starting message receiving loop...\")\n while True:\n logger_adapter.info(\"Checking for new messages...\")\n receive_message_gen = receive_message(env_vars, service_bus_client)\n try:\n async for message in receive_message_gen:\n logger_adapter.info(f\"Message received for id={message['id']}\")\n message_logger_adapter = initialize_logging(logging.INFO, message['id'])\n result = await deploy_porter_bundle(message, service_bus_client, env_vars, message_logger_adapter)\n await receive_message_gen.asend(result)\n except StopAsyncIteration: # the async generator when finished signals end with this exception.\n pass\n logger_adapter.info(\"All messages done sleeping...\")\n await asyncio.sleep(60)\n\n\ndef read_env_vars():\n env_vars = {\n # Needed for local dev\n \"app_id\": os.environ.get(\"AZURE_CLIENT_ID\", None),\n \"app_password\": os.environ.get(\"AZURE_CLIENT_SECRET\", None),\n\n \"registry_server\": os.environ[\"REGISTRY_SERVER\"],\n \"tfstate_container_name\": os.environ['TERRAFORM_STATE_CONTAINER_NAME'],\n \"tfstate_resource_group_name\": os.environ['MGMT_RESOURCE_GROUP_NAME'],\n \"tfstate_storage_account_name\": os.environ['MGMT_STORAGE_ACCOUNT_NAME'],\n \"deployment_status_queue\": os.environ['SERVICE_BUS_DEPLOYMENT_STATUS_UPDATE_QUEUE'],\n \"resource_request_queue\": os.environ['SERVICE_BUS_RESOURCE_REQUEST_QUEUE'],\n \"service_bus_namespace\": os.environ['SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE'],\n \"vmss_msi_id\": os.environ.get('VMSS_MSI_ID', None),\n\n # Needed for running porter\n \"arm_use_msi\": os.environ[\"ARM_USE_MSI\"],\n \"arm_subscription_id\": os.environ['ARM_SUBSCRIPTION_ID'],\n \"arm_client_id\": os.environ[\"ARM_CLIENT_ID\"],\n \"arm_tenant_id\": os.environ[\"ARM_TENANT_ID\"]\n }\n\n env_vars[\"arm_client_secret\"] = os.environ[\"ARM_CLIENT_SECRET\"] if env_vars[\"arm_use_msi\"] == \"false\" else \"\"\n\n return env_vars\n\n\nif __name__ == \"__main__\":\n try:\n env_vars = read_env_vars()\n except KeyError as e:\n logger_adapter.error(f\"Environment variable {e} is not set correctly...Exiting\")\n sys.exit(1)\n logger_adapter.info(\"Started processor\")\n asyncio.run(runner(env_vars))\n", "path": "processor_function/vm_porter/runner.py"}]} | 3,387 | 146 |
gh_patches_debug_948 | rasdani/github-patches | git_diff | deis__deis-280 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update chef_version in provisioning scripts
I see in the digitalocean support that @bacongobbler removed the --bootstrap-version=11.4.4 and things still seem to work with more current Chef (11.6.2). This wasn't the case before--the apt cookbook failed--so we had pinned it at a working version.
Let's retest that we're compatible with Chef 11.6.x and then remove --bootstrap-version from the provisioning scripts if so.
</issue>
<code>
[start of cm/chef.py]
1 """
2 Deis configuration management implementation for Opscode Chef.
3 """
4
5 from __future__ import unicode_literals
6
7 import os
8 import re
9 import subprocess
10 import tempfile
11 import time
12 import socket
13
14 from celery.canvas import group
15
16 from api.ssh import exec_ssh, connect_ssh
17 from cm.chef_api import ChefAPI
18
19
20 CHEF_CONFIG_PATH = '/etc/chef'
21 CHEF_INSTALL_TYPE = 'gems'
22 CHEF_RUBY_VERSION = '1.9.1'
23 CHEF_ENVIRONMENT = '_default'
24 CHEF_CLIENT_VERSION = '11.4.4'
25
26 # load chef config using CHEF_CONFIG_PATH
27 try:
28 # parse controller's chef config for server_url and client_name
29 _client_cfg_path = os.path.join(CHEF_CONFIG_PATH, 'client.rb')
30 if not os.path.exists(_client_cfg_path):
31 raise EnvironmentError('Could not find {}'.format(_client_cfg_path))
32 with open(_client_cfg_path) as f:
33 _data = f.read()
34 # construct a dict from the ruby client.rb
35 _d = {}
36 for m in re.findall(r'''^([a-zA-Z0-9_]+)[ \t]+(.*)$''',
37 _data, re.MULTILINE):
38 _d[m[0]] = m[1].strip("'").strip('"')
39 # set global variables from client.rb
40 CHEF_SERVER_URL = _d['chef_server_url']
41 CHEF_NODE_NAME = _d.get('node_name', socket.gethostname())
42 CHEF_CLIENT_NAME = _d.get('node_name', socket.gethostname())
43 CHEF_VALIDATION_NAME = _d['validation_client_name']
44 # read the client key
45 _client_pem_path = os.path.join(CHEF_CONFIG_PATH, 'client.pem')
46 CHEF_CLIENT_KEY = subprocess.check_output(
47 ['sudo', '/bin/cat', _client_pem_path]).strip('\n')
48 # read the validation key
49 _valid_pem_path = os.path.join(CHEF_CONFIG_PATH, 'validation.pem')
50 CHEF_VALIDATION_KEY = subprocess.check_output(
51 ['sudo', '/bin/cat', _valid_pem_path]).strip('\n')
52 except Exception as err:
53 msg = "Failed to auto-configure Chef -- {}".format(err)
54 if os.environ.get('READTHEDOCS'):
55 # Just print the error if Sphinx is running
56 print(msg)
57 else:
58 raise EnvironmentError(msg)
59
60
61 def _get_client():
62 """
63 Return a new instance of a Chef API Client
64
65 :rtype: a :class:`~cm.chef_api.ChefAPI` object
66 """
67 return ChefAPI(CHEF_SERVER_URL, CHEF_CLIENT_NAME, CHEF_CLIENT_KEY)
68
69
70 def bootstrap_node(node):
71 """
72 Bootstrap the Chef configuration management tools onto a node.
73
74 :param node: a dict containing the node's fully-qualified domain name and SSH info
75 :raises: RuntimeError
76 """
77 # block until we can connect over ssh
78 ssh = connect_ssh(node['ssh_username'], node['fqdn'], node.get('ssh_port', 22),
79 node['ssh_private_key'], timeout=120)
80 # block until ubuntu cloud-init is finished
81 initializing = True
82 while initializing:
83 time.sleep(10)
84 initializing, _rc = exec_ssh(ssh, 'ps auxw | egrep "cloud-init" | grep -v egrep')
85 # write out private key and prepare to `knife bootstrap`
86 try:
87 _, pk_path = tempfile.mkstemp()
88 _, output_path = tempfile.mkstemp()
89 with open(pk_path, 'w') as f:
90 f.write(node['ssh_private_key'])
91 # build knife bootstrap command
92 args = ['knife', 'bootstrap', node['fqdn']]
93 args.extend(['--identity-file', pk_path])
94 args.extend(['--node-name', node['id']])
95 args.extend(['--sudo', '--ssh-user', node['ssh_username']])
96 args.extend(['--ssh-port', str(node.get('ssh_port', 22))])
97 args.extend(['--bootstrap-version', CHEF_CLIENT_VERSION])
98 args.extend(['--no-host-key-verify'])
99 args.extend(['--run-list', _construct_run_list(node)])
100 print(' '.join(args))
101 # tee the command's output to a tempfile
102 args.extend(['|', 'tee', output_path])
103 # TODO: figure out why home isn't being set correctly for knife exec
104 env = os.environ.copy()
105 env['HOME'] = '/opt/deis'
106 # execute knife bootstrap
107 p = subprocess.Popen(' '.join(args), env=env, shell=True)
108 rc = p.wait()
109 # always print knife output
110 with open(output_path) as f:
111 output = f.read()
112 print(output)
113 # raise an exception if bootstrap failed
114 if rc != 0:
115 raise RuntimeError('Node Bootstrap Error')
116 # remove temp files from filesystem
117 finally:
118 os.remove(pk_path)
119 os.remove(output_path)
120
121
122 def _construct_run_list(node):
123 config = node['config']
124 # if run_list override specified, use it (assumes csv)
125 run_list = config.get('run_list', [])
126 # otherwise construct a run_list using proxy/runtime flags
127 if not run_list:
128 run_list = ['recipe[deis]']
129 if node.get('runtime') is True:
130 run_list.append('recipe[deis::runtime]')
131 if node.get('proxy') is True:
132 run_list.append('recipe[deis::proxy]')
133 return ','.join(run_list)
134
135
136 def purge_node(node):
137 """
138 Purge a node and its client from Chef configuration management.
139
140 :param node: a dict containing the id of a node to purge
141 """
142 client = _get_client()
143 client.delete_node(node['id'])
144 client.delete_client(node['id'])
145
146
147 def converge_controller():
148 """
149 Converge this controller node.
150
151 "Converge" means to change a node's configuration to match that defined by
152 configuration management.
153
154 :returns: the output of the convergence command, in this case `sudo chef-client`
155 """
156 try:
157 return subprocess.check_output(['sudo', 'chef-client'])
158 except subprocess.CalledProcessError as err:
159 print(err)
160 print(err.output)
161 raise err
162
163
164 def converge_node(node):
165 """
166 Converge a node.
167
168 "Converge" means to change a node's configuration to match that defined by
169 configuration management.
170
171 :param node: a dict containing the node's fully-qualified domain name and SSH info
172 :returns: a tuple of the convergence command's (output, return_code)
173 """
174 ssh = connect_ssh(node['ssh_username'],
175 node['fqdn'], 22,
176 node['ssh_private_key'])
177 output, rc = exec_ssh(ssh, 'sudo chef-client')
178 print(output)
179 if rc != 0:
180 e = RuntimeError('Node converge error')
181 e.output = output
182 raise e
183 return output, rc
184
185
186 def run_node(node, command):
187 """
188 Run a command on a node.
189
190 :param node: a dict containing the node's fully-qualified domain name and SSH info
191 :param command: the command-line to execute on the node
192 :returns: a tuple of the command's (output, return_code)
193 """
194 ssh = connect_ssh(node['ssh_username'], node['fqdn'],
195 node['ssh_port'], node['ssh_private_key'])
196 output, rc = exec_ssh(ssh, command, pty=True)
197 return output, rc
198
199
200 def converge_formation(formation):
201 """
202 Converge all nodes in a formation.
203
204 "Converge" means to change a node's configuration to match that defined by
205 configuration management.
206
207 :param formation: a :class:`~api.models.Formation` to converge
208 :returns: the combined output of the nodes' convergence commands
209 """
210 nodes = formation.node_set.all()
211 subtasks = []
212 for n in nodes:
213 subtask = converge_node.s(n.id,
214 n.layer.flavor.ssh_username,
215 n.fqdn,
216 n.layer.flavor.ssh_private_key)
217 subtasks.append(subtask)
218 job = group(*subtasks)
219 return job.apply_async().join()
220
221
222 def publish_user(user, data):
223 """
224 Publish a user to configuration management.
225
226 :param user: a dict containing the username
227 :param data: data to store with the user
228 :returns: a tuple of (body, status) from the underlying HTTP response
229 :raises: RuntimeError
230 """
231 _publish('deis-users', user['username'], data)
232
233
234 def publish_app(app, data):
235 """
236 Publish an app to configuration management.
237
238 :param app: a dict containing the id of the app
239 :param data: data to store with the app
240 :returns: a tuple of (body, status) from the underlying HTTP response
241 :raises: RuntimeError
242 """
243 _publish('deis-apps', app['id'], data)
244
245
246 def purge_app(app):
247 """
248 Purge an app from configuration management.
249
250 :param app: a dict containing the id of the app
251 :returns: a tuple of (body, status) from the underlying HTTP response
252 :raises: RuntimeError
253 """
254 _purge('deis-apps', app['id'])
255
256
257 def publish_formation(formation, data):
258 """
259 Publish a formation to configuration management.
260
261 :param formation: a dict containing the id of the formation
262 :param data: data to store with the formation
263 :returns: a tuple of (body, status) from the underlying HTTP response
264 :raises: RuntimeError
265 """
266 _publish('deis-formations', formation['id'], data)
267
268
269 def purge_formation(formation):
270 """
271 Purge a formation from configuration management.
272
273 :param formation: a dict containing the id of the formation
274 :returns: a tuple of (body, status) from the underlying HTTP response
275 :raises: RuntimeError
276 """
277 _purge('deis-formations', formation['id'])
278
279
280 def _publish(data_bag, item_name, item_value):
281 """
282 Publish a data bag item to the Chef server.
283
284 :param data_bag: the name of a Chef data bag
285 :param item_name: the name of the item to publish
286 :param item_value: the value of the item to publish
287 :returns: a tuple of (body, status) from the underlying HTTP response
288 :raises: RuntimeError
289 """
290 client = _get_client()
291 body, status = client.update_databag_item(data_bag, item_name, item_value)
292 if status != 200:
293 body, status = client.create_databag_item(data_bag, item_name, item_value)
294 if status != 201:
295 raise RuntimeError('Could not publish {item_name}: {body}'.format(**locals()))
296 return body, status
297
298
299 def _purge(databag_name, item_name):
300 """
301 Purge a data bag item from the Chef server.
302
303 :param databag_name: the name of a Chef data bag
304 :param item_name: the name of the item to purge
305 :returns: a tuple of (body, status) from the underlying HTTP response
306 :raises: RuntimeError
307 """
308 client = _get_client()
309 body, status = client.delete_databag_item(databag_name, item_name)
310 if status == 200 or status == 404:
311 return body, status
312 raise RuntimeError('Could not purge {item_name}: {body}'.format(**locals()))
313
[end of cm/chef.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cm/chef.py b/cm/chef.py
--- a/cm/chef.py
+++ b/cm/chef.py
@@ -21,7 +21,7 @@
CHEF_INSTALL_TYPE = 'gems'
CHEF_RUBY_VERSION = '1.9.1'
CHEF_ENVIRONMENT = '_default'
-CHEF_CLIENT_VERSION = '11.4.4'
+CHEF_CLIENT_VERSION = '11.6.2'
# load chef config using CHEF_CONFIG_PATH
try:
| {"golden_diff": "diff --git a/cm/chef.py b/cm/chef.py\n--- a/cm/chef.py\n+++ b/cm/chef.py\n@@ -21,7 +21,7 @@\n CHEF_INSTALL_TYPE = 'gems'\n CHEF_RUBY_VERSION = '1.9.1'\n CHEF_ENVIRONMENT = '_default'\n-CHEF_CLIENT_VERSION = '11.4.4'\n+CHEF_CLIENT_VERSION = '11.6.2'\n \n # load chef config using CHEF_CONFIG_PATH\n try:\n", "issue": "Update chef_version in provisioning scripts\nI see in the digitalocean support that @bacongobbler removed the --bootstrap-version=11.4.4 and things still seem to work with more current Chef (11.6.2). This wasn't the case before--the apt cookbook failed--so we had pinned it at a working version.\n\nLet's retest that we're compatible with Chef 11.6.x and then remove --bootstrap-version from the provisioning scripts if so.\n\n", "before_files": [{"content": "\"\"\"\nDeis configuration management implementation for Opscode Chef.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport re\nimport subprocess\nimport tempfile\nimport time\nimport socket\n\nfrom celery.canvas import group\n\nfrom api.ssh import exec_ssh, connect_ssh\nfrom cm.chef_api import ChefAPI\n\n\nCHEF_CONFIG_PATH = '/etc/chef'\nCHEF_INSTALL_TYPE = 'gems'\nCHEF_RUBY_VERSION = '1.9.1'\nCHEF_ENVIRONMENT = '_default'\nCHEF_CLIENT_VERSION = '11.4.4'\n\n# load chef config using CHEF_CONFIG_PATH\ntry:\n # parse controller's chef config for server_url and client_name\n _client_cfg_path = os.path.join(CHEF_CONFIG_PATH, 'client.rb')\n if not os.path.exists(_client_cfg_path):\n raise EnvironmentError('Could not find {}'.format(_client_cfg_path))\n with open(_client_cfg_path) as f:\n _data = f.read()\n # construct a dict from the ruby client.rb\n _d = {}\n for m in re.findall(r'''^([a-zA-Z0-9_]+)[ \\t]+(.*)$''',\n _data, re.MULTILINE):\n _d[m[0]] = m[1].strip(\"'\").strip('\"')\n # set global variables from client.rb\n CHEF_SERVER_URL = _d['chef_server_url']\n CHEF_NODE_NAME = _d.get('node_name', socket.gethostname())\n CHEF_CLIENT_NAME = _d.get('node_name', socket.gethostname())\n CHEF_VALIDATION_NAME = _d['validation_client_name']\n # read the client key\n _client_pem_path = os.path.join(CHEF_CONFIG_PATH, 'client.pem')\n CHEF_CLIENT_KEY = subprocess.check_output(\n ['sudo', '/bin/cat', _client_pem_path]).strip('\\n')\n # read the validation key\n _valid_pem_path = os.path.join(CHEF_CONFIG_PATH, 'validation.pem')\n CHEF_VALIDATION_KEY = subprocess.check_output(\n ['sudo', '/bin/cat', _valid_pem_path]).strip('\\n')\nexcept Exception as err:\n msg = \"Failed to auto-configure Chef -- {}\".format(err)\n if os.environ.get('READTHEDOCS'):\n # Just print the error if Sphinx is running\n print(msg)\n else:\n raise EnvironmentError(msg)\n\n\ndef _get_client():\n \"\"\"\n Return a new instance of a Chef API Client\n\n :rtype: a :class:`~cm.chef_api.ChefAPI` object\n \"\"\"\n return ChefAPI(CHEF_SERVER_URL, CHEF_CLIENT_NAME, CHEF_CLIENT_KEY)\n\n\ndef bootstrap_node(node):\n \"\"\"\n Bootstrap the Chef configuration management tools onto a node.\n\n :param node: a dict containing the node's fully-qualified domain name and SSH info\n :raises: RuntimeError\n \"\"\"\n # block until we can connect over ssh\n ssh = connect_ssh(node['ssh_username'], node['fqdn'], node.get('ssh_port', 22),\n node['ssh_private_key'], timeout=120)\n # block until ubuntu cloud-init is finished\n initializing = True\n while initializing:\n time.sleep(10)\n initializing, _rc = exec_ssh(ssh, 'ps auxw | egrep \"cloud-init\" | grep -v egrep')\n # write out private key and prepare to `knife bootstrap`\n try:\n _, pk_path = tempfile.mkstemp()\n _, output_path = tempfile.mkstemp()\n with open(pk_path, 'w') as f:\n f.write(node['ssh_private_key'])\n # build knife bootstrap command\n args = ['knife', 'bootstrap', node['fqdn']]\n args.extend(['--identity-file', pk_path])\n args.extend(['--node-name', node['id']])\n args.extend(['--sudo', '--ssh-user', node['ssh_username']])\n args.extend(['--ssh-port', str(node.get('ssh_port', 22))])\n args.extend(['--bootstrap-version', CHEF_CLIENT_VERSION])\n args.extend(['--no-host-key-verify'])\n args.extend(['--run-list', _construct_run_list(node)])\n print(' '.join(args))\n # tee the command's output to a tempfile\n args.extend(['|', 'tee', output_path])\n # TODO: figure out why home isn't being set correctly for knife exec\n env = os.environ.copy()\n env['HOME'] = '/opt/deis'\n # execute knife bootstrap\n p = subprocess.Popen(' '.join(args), env=env, shell=True)\n rc = p.wait()\n # always print knife output\n with open(output_path) as f:\n output = f.read()\n print(output)\n # raise an exception if bootstrap failed\n if rc != 0:\n raise RuntimeError('Node Bootstrap Error')\n # remove temp files from filesystem\n finally:\n os.remove(pk_path)\n os.remove(output_path)\n\n\ndef _construct_run_list(node):\n config = node['config']\n # if run_list override specified, use it (assumes csv)\n run_list = config.get('run_list', [])\n # otherwise construct a run_list using proxy/runtime flags\n if not run_list:\n run_list = ['recipe[deis]']\n if node.get('runtime') is True:\n run_list.append('recipe[deis::runtime]')\n if node.get('proxy') is True:\n run_list.append('recipe[deis::proxy]')\n return ','.join(run_list)\n\n\ndef purge_node(node):\n \"\"\"\n Purge a node and its client from Chef configuration management.\n\n :param node: a dict containing the id of a node to purge\n \"\"\"\n client = _get_client()\n client.delete_node(node['id'])\n client.delete_client(node['id'])\n\n\ndef converge_controller():\n \"\"\"\n Converge this controller node.\n\n \"Converge\" means to change a node's configuration to match that defined by\n configuration management.\n\n :returns: the output of the convergence command, in this case `sudo chef-client`\n \"\"\"\n try:\n return subprocess.check_output(['sudo', 'chef-client'])\n except subprocess.CalledProcessError as err:\n print(err)\n print(err.output)\n raise err\n\n\ndef converge_node(node):\n \"\"\"\n Converge a node.\n\n \"Converge\" means to change a node's configuration to match that defined by\n configuration management.\n\n :param node: a dict containing the node's fully-qualified domain name and SSH info\n :returns: a tuple of the convergence command's (output, return_code)\n \"\"\"\n ssh = connect_ssh(node['ssh_username'],\n node['fqdn'], 22,\n node['ssh_private_key'])\n output, rc = exec_ssh(ssh, 'sudo chef-client')\n print(output)\n if rc != 0:\n e = RuntimeError('Node converge error')\n e.output = output\n raise e\n return output, rc\n\n\ndef run_node(node, command):\n \"\"\"\n Run a command on a node.\n\n :param node: a dict containing the node's fully-qualified domain name and SSH info\n :param command: the command-line to execute on the node\n :returns: a tuple of the command's (output, return_code)\n \"\"\"\n ssh = connect_ssh(node['ssh_username'], node['fqdn'],\n node['ssh_port'], node['ssh_private_key'])\n output, rc = exec_ssh(ssh, command, pty=True)\n return output, rc\n\n\ndef converge_formation(formation):\n \"\"\"\n Converge all nodes in a formation.\n\n \"Converge\" means to change a node's configuration to match that defined by\n configuration management.\n\n :param formation: a :class:`~api.models.Formation` to converge\n :returns: the combined output of the nodes' convergence commands\n \"\"\"\n nodes = formation.node_set.all()\n subtasks = []\n for n in nodes:\n subtask = converge_node.s(n.id,\n n.layer.flavor.ssh_username,\n n.fqdn,\n n.layer.flavor.ssh_private_key)\n subtasks.append(subtask)\n job = group(*subtasks)\n return job.apply_async().join()\n\n\ndef publish_user(user, data):\n \"\"\"\n Publish a user to configuration management.\n\n :param user: a dict containing the username\n :param data: data to store with the user\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _publish('deis-users', user['username'], data)\n\n\ndef publish_app(app, data):\n \"\"\"\n Publish an app to configuration management.\n\n :param app: a dict containing the id of the app\n :param data: data to store with the app\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _publish('deis-apps', app['id'], data)\n\n\ndef purge_app(app):\n \"\"\"\n Purge an app from configuration management.\n\n :param app: a dict containing the id of the app\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _purge('deis-apps', app['id'])\n\n\ndef publish_formation(formation, data):\n \"\"\"\n Publish a formation to configuration management.\n\n :param formation: a dict containing the id of the formation\n :param data: data to store with the formation\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _publish('deis-formations', formation['id'], data)\n\n\ndef purge_formation(formation):\n \"\"\"\n Purge a formation from configuration management.\n\n :param formation: a dict containing the id of the formation\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n _purge('deis-formations', formation['id'])\n\n\ndef _publish(data_bag, item_name, item_value):\n \"\"\"\n Publish a data bag item to the Chef server.\n\n :param data_bag: the name of a Chef data bag\n :param item_name: the name of the item to publish\n :param item_value: the value of the item to publish\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n client = _get_client()\n body, status = client.update_databag_item(data_bag, item_name, item_value)\n if status != 200:\n body, status = client.create_databag_item(data_bag, item_name, item_value)\n if status != 201:\n raise RuntimeError('Could not publish {item_name}: {body}'.format(**locals()))\n return body, status\n\n\ndef _purge(databag_name, item_name):\n \"\"\"\n Purge a data bag item from the Chef server.\n\n :param databag_name: the name of a Chef data bag\n :param item_name: the name of the item to purge\n :returns: a tuple of (body, status) from the underlying HTTP response\n :raises: RuntimeError\n \"\"\"\n client = _get_client()\n body, status = client.delete_databag_item(databag_name, item_name)\n if status == 200 or status == 404:\n return body, status\n raise RuntimeError('Could not purge {item_name}: {body}'.format(**locals()))\n", "path": "cm/chef.py"}]} | 4,004 | 110 |
gh_patches_debug_22524 | rasdani/github-patches | git_diff | napari__napari-1402 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Colormap in 3D broken by Image Layer Event Handler
## 🐛 Bug
Changing the colormap in 3D doesn't cause the colormap for the actual data to update. The thumbnail does update. This was likely introduced in #1376. Changing colormap in 2D still works fine.
</issue>
<code>
[start of napari/_vispy/vispy_image_layer.py]
1 import warnings
2 from vispy.scene.visuals import Image as ImageNode
3 from .volume import Volume as VolumeNode
4 from vispy.color import Colormap
5 import numpy as np
6 from .vispy_base_layer import VispyBaseLayer
7 from ..layers.image._image_constants import Rendering
8 from ..utils.colormaps import ensure_colormap_tuple
9
10
11 texture_dtypes = [
12 np.dtype(np.int8),
13 np.dtype(np.uint8),
14 np.dtype(np.int16),
15 np.dtype(np.uint16),
16 np.dtype(np.float32),
17 ]
18
19
20 class VispyImageLayer(VispyBaseLayer):
21 def __init__(self, layer):
22 node = ImageNode(None, method='auto')
23 super().__init__(layer, node)
24
25 # Once #1842 and #1844 from vispy are released and gamma adjustment is
26 # done on the GPU these can be dropped
27 self._raw_cmap = None
28 self._gamma = 1
29
30 # Until we add a specific attenuation parameter to vispy we have to
31 # track both iso_threshold and attenuation ourselves.
32 self._iso_threshold = 1
33 self._attenuation = 1
34
35 self._on_display_change()
36 self._on_slice_data_change()
37
38 def _on_display_change(self, data=None):
39 parent = self.node.parent
40 self.node.parent = None
41
42 if self.layer.dims.ndisplay == 2:
43 self.node = ImageNode(data, method='auto')
44 else:
45 if data is None:
46 data = np.zeros((1, 1, 1))
47 self.node = VolumeNode(data, clim=self.layer.contrast_limits)
48
49 self.node.parent = parent
50 self.reset()
51
52 def _on_slice_data_change(self, event=None):
53 # Slice data event will be fixed to use passed value after EVH refactor
54 # is finished for all layers
55 data = self.layer._data_view
56 dtype = np.dtype(data.dtype)
57 if dtype not in texture_dtypes:
58 try:
59 dtype = dict(
60 i=np.int16, f=np.float32, u=np.uint16, b=np.uint8
61 )[dtype.kind]
62 except KeyError: # not an int or float
63 raise TypeError(
64 f'type {dtype} not allowed for texture; must be one of {set(texture_dtypes)}' # noqa: E501
65 )
66 data = data.astype(dtype)
67
68 if self.layer.dims.ndisplay == 3 and self.layer.dims.ndim == 2:
69 data = np.expand_dims(data, axis=0)
70
71 # Check if data exceeds MAX_TEXTURE_SIZE and downsample
72 if (
73 self.MAX_TEXTURE_SIZE_2D is not None
74 and self.layer.dims.ndisplay == 2
75 ):
76 data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_2D)
77 elif (
78 self.MAX_TEXTURE_SIZE_3D is not None
79 and self.layer.dims.ndisplay == 3
80 ):
81 data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_3D)
82
83 # Check if ndisplay has changed current node type needs updating
84 if (
85 self.layer.dims.ndisplay == 3
86 and not isinstance(self.node, VolumeNode)
87 ) or (
88 self.layer.dims.ndisplay == 2
89 and not isinstance(self.node, ImageNode)
90 ):
91 self._on_display_change(data)
92 else:
93 if self.layer.dims.ndisplay == 2:
94 self.node._need_colortransform_update = True
95 self.node.set_data(data)
96 else:
97 self.node.set_data(data, clim=self.layer.contrast_limits)
98
99 # Call to update order of translation values with new dims:
100 self._on_scale_change()
101 self._on_translate_change()
102 self.node.update()
103
104 def _on_interpolation_change(self, interpolation):
105 """Receive layer model isosurface change event and update the visual.
106
107 Parameters
108 ----------
109 interpolation : float
110 Iso surface threshold value, between 0 and 1.
111 """
112 self.node.interpolation = interpolation
113
114 def _on_rendering_change(self, rendering):
115 """Receive layer model rendering change event and update dropdown menu.
116
117 Parameters
118 ----------
119 text : str
120 Rendering mode used by VisPy.
121 Selects a preset rendering mode in VisPy that determines how
122 volume is displayed:
123 * translucent: voxel colors are blended along the view ray until
124 the result is opaque.
125 * mip: maxiumum intensity projection. Cast a ray and display the
126 maximum value that was encountered.
127 * additive: voxel colors are added along the view ray until
128 the result is saturated.
129 * iso: isosurface. Cast a ray until a certain threshold is
130 encountered. At that location, lighning calculations are
131 performed to give the visual appearance of a surface.
132 * attenuated_mip: attenuated maxiumum intensity projection. Cast a
133 ray and attenuate values based on integral of encountered values,
134 display the maximum value that was encountered after attenuation.
135 This will make nearer objects appear more prominent.
136 """
137 if isinstance(self.node, VolumeNode):
138 self.node.method = rendering
139 if Rendering(rendering) == Rendering.ISO:
140 self.node.threshold = float(self._iso_threshold)
141 elif Rendering(rendering) == Rendering.ATTENUATED_MIP:
142 self.node.threshold = float(self._attenuation)
143
144 def _on_colormap_change(self, colormap):
145 """Receive layer model colormap change event and update the visual.
146
147 Parameters
148 ----------
149 colormap : str or tuple
150 Colormap name or tuple of (name, vispy.color.Colormap).
151 """
152 name, cmap = ensure_colormap_tuple(colormap)
153 # Once #1842 and #1844 from vispy are released and gamma adjustment is
154 # done on the GPU this can be dropped
155 self._raw_cmap = cmap
156 if self._gamma != 1:
157 # when gamma!=1, we instantiate a new colormap with 256 control
158 # points from 0-1
159 node_cmap = Colormap(cmap[np.linspace(0, 1, 256) ** self._gamma])
160 else:
161 node_cmap = cmap
162 self.node.cmap = node_cmap
163
164 def _on_contrast_limits_change(self, contrast_limits):
165 """Receive layer model contrast limits change event and update visual.
166
167 Parameters
168 ----------
169 contrast_limits : tuple
170 Contrast limits.
171 """
172 # Once #1842 from vispy is released this if else can be dropped
173 if isinstance(self.node, VolumeNode):
174 self._on_slice_data_change()
175 else:
176 self.node.clim = contrast_limits
177
178 def _on_gamma_change(self, gamma):
179 """Receive the layer model gamma change event and update the visual.
180
181 Parameters
182 ----------
183 gamma : float
184 Gamma value.
185 """
186 # Once #1842 and #1844 from vispy are released and gamma adjustment is
187 # done on the GPU this can be dropped
188 if gamma != 1:
189 # when gamma!=1, we instantiate a new colormap with 256 control
190 # points from 0-1
191 cmap = Colormap(self._raw_cmap[np.linspace(0, 1, 256) ** gamma])
192 else:
193 cmap = self._raw_cmap
194 self._gamma = gamma
195 self.node.cmap = cmap
196
197 def _on_iso_threshold_change(self, iso_threshold):
198 """Receive layer model isosurface change event and update the visual.
199
200 Parameters
201 ----------
202 iso_threshold : float
203 Iso surface threshold value, between 0 and 1.
204 """
205 if (
206 isinstance(self.node, VolumeNode)
207 and Rendering(self.node.method) == Rendering.ISO
208 ):
209 self._iso_threshold = iso_threshold
210 self.node.threshold = float(iso_threshold)
211
212 def _on_attenuation_change(self, attenuation):
213 """Receive layer model attenuation change event and update the visual.
214
215 Parameters
216 ----------
217 attenuation : float
218 Attenuation value, between 0 and 2.
219 """
220 if (
221 isinstance(self.node, VolumeNode)
222 and Rendering(self.node.method) == Rendering.ATTENUATED_MIP
223 ):
224 self._attenuation = attenuation
225 self.node.threshold = float(attenuation)
226
227 def reset(self, event=None):
228 self._reset_base()
229 self._on_colormap_change(self.layer.colormap)
230 self._on_rendering_change(self.layer.rendering)
231 if isinstance(self.node, ImageNode):
232 self._on_contrast_limits_change(self.layer.contrast_limits)
233
234 def downsample_texture(self, data, MAX_TEXTURE_SIZE):
235 """Downsample data based on maximum allowed texture size.
236
237 Parameters
238 ----------
239 data : array
240 Data to be downsampled if needed.
241 MAX_TEXTURE_SIZE : int
242 Maximum allowed texture size.
243
244 Returns
245 -------
246 data : array
247 Data that now fits inside texture.
248 """
249 if np.any(np.greater(data.shape, MAX_TEXTURE_SIZE)):
250 if self.layer.multiscale:
251 raise ValueError(
252 f"Shape of individual tiles in multiscale {data.shape} "
253 f"cannot exceed GL_MAX_TEXTURE_SIZE "
254 f"{MAX_TEXTURE_SIZE}. Rendering is currently in "
255 f"{self.layer.dims.ndisplay}D mode."
256 )
257 warnings.warn(
258 f"data shape {data.shape} exceeds GL_MAX_TEXTURE_SIZE "
259 f"{MAX_TEXTURE_SIZE} in at least one axis and "
260 f"will be downsampled. Rendering is currently in "
261 f"{self.layer.dims.ndisplay}D mode."
262 )
263 downsample = np.ceil(
264 np.divide(data.shape, MAX_TEXTURE_SIZE)
265 ).astype(int)
266 scale = np.ones(self.layer.ndim)
267 for i, d in enumerate(self.layer.dims.displayed):
268 scale[d] = downsample[i]
269 self.layer._transforms['tile2data'].scale = scale
270 self._on_scale_change()
271 slices = tuple(slice(None, None, ds) for ds in downsample)
272 data = data[slices]
273 return data
274
[end of napari/_vispy/vispy_image_layer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/napari/_vispy/vispy_image_layer.py b/napari/_vispy/vispy_image_layer.py
--- a/napari/_vispy/vispy_image_layer.py
+++ b/napari/_vispy/vispy_image_layer.py
@@ -159,6 +159,13 @@
node_cmap = Colormap(cmap[np.linspace(0, 1, 256) ** self._gamma])
else:
node_cmap = cmap
+ # Following should be added to cmap setter in VisPy volume visual
+ if isinstance(self.node, VolumeNode):
+ self.node.view_program['texture2D_LUT'] = (
+ node_cmap.texture_lut()
+ if (hasattr(node_cmap, 'texture_lut'))
+ else None
+ )
self.node.cmap = node_cmap
def _on_contrast_limits_change(self, contrast_limits):
@@ -192,6 +199,11 @@
else:
cmap = self._raw_cmap
self._gamma = gamma
+ # Following should be added to cmap setter in VisPy volume visual
+ if isinstance(self.node, VolumeNode):
+ self.node.view_program['texture2D_LUT'] = (
+ cmap.texture_lut() if (hasattr(cmap, 'texture_lut')) else None
+ )
self.node.cmap = cmap
def _on_iso_threshold_change(self, iso_threshold):
| {"golden_diff": "diff --git a/napari/_vispy/vispy_image_layer.py b/napari/_vispy/vispy_image_layer.py\n--- a/napari/_vispy/vispy_image_layer.py\n+++ b/napari/_vispy/vispy_image_layer.py\n@@ -159,6 +159,13 @@\n node_cmap = Colormap(cmap[np.linspace(0, 1, 256) ** self._gamma])\n else:\n node_cmap = cmap\n+ # Following should be added to cmap setter in VisPy volume visual\n+ if isinstance(self.node, VolumeNode):\n+ self.node.view_program['texture2D_LUT'] = (\n+ node_cmap.texture_lut()\n+ if (hasattr(node_cmap, 'texture_lut'))\n+ else None\n+ )\n self.node.cmap = node_cmap\n \n def _on_contrast_limits_change(self, contrast_limits):\n@@ -192,6 +199,11 @@\n else:\n cmap = self._raw_cmap\n self._gamma = gamma\n+ # Following should be added to cmap setter in VisPy volume visual\n+ if isinstance(self.node, VolumeNode):\n+ self.node.view_program['texture2D_LUT'] = (\n+ cmap.texture_lut() if (hasattr(cmap, 'texture_lut')) else None\n+ )\n self.node.cmap = cmap\n \n def _on_iso_threshold_change(self, iso_threshold):\n", "issue": "Colormap in 3D broken by Image Layer Event Handler\n## \ud83d\udc1b Bug\r\n\r\nChanging the colormap in 3D doesn't cause the colormap for the actual data to update. The thumbnail does update. This was likely introduced in #1376. Changing colormap in 2D still works fine.\n", "before_files": [{"content": "import warnings\nfrom vispy.scene.visuals import Image as ImageNode\nfrom .volume import Volume as VolumeNode\nfrom vispy.color import Colormap\nimport numpy as np\nfrom .vispy_base_layer import VispyBaseLayer\nfrom ..layers.image._image_constants import Rendering\nfrom ..utils.colormaps import ensure_colormap_tuple\n\n\ntexture_dtypes = [\n np.dtype(np.int8),\n np.dtype(np.uint8),\n np.dtype(np.int16),\n np.dtype(np.uint16),\n np.dtype(np.float32),\n]\n\n\nclass VispyImageLayer(VispyBaseLayer):\n def __init__(self, layer):\n node = ImageNode(None, method='auto')\n super().__init__(layer, node)\n\n # Once #1842 and #1844 from vispy are released and gamma adjustment is\n # done on the GPU these can be dropped\n self._raw_cmap = None\n self._gamma = 1\n\n # Until we add a specific attenuation parameter to vispy we have to\n # track both iso_threshold and attenuation ourselves.\n self._iso_threshold = 1\n self._attenuation = 1\n\n self._on_display_change()\n self._on_slice_data_change()\n\n def _on_display_change(self, data=None):\n parent = self.node.parent\n self.node.parent = None\n\n if self.layer.dims.ndisplay == 2:\n self.node = ImageNode(data, method='auto')\n else:\n if data is None:\n data = np.zeros((1, 1, 1))\n self.node = VolumeNode(data, clim=self.layer.contrast_limits)\n\n self.node.parent = parent\n self.reset()\n\n def _on_slice_data_change(self, event=None):\n # Slice data event will be fixed to use passed value after EVH refactor\n # is finished for all layers\n data = self.layer._data_view\n dtype = np.dtype(data.dtype)\n if dtype not in texture_dtypes:\n try:\n dtype = dict(\n i=np.int16, f=np.float32, u=np.uint16, b=np.uint8\n )[dtype.kind]\n except KeyError: # not an int or float\n raise TypeError(\n f'type {dtype} not allowed for texture; must be one of {set(texture_dtypes)}' # noqa: E501\n )\n data = data.astype(dtype)\n\n if self.layer.dims.ndisplay == 3 and self.layer.dims.ndim == 2:\n data = np.expand_dims(data, axis=0)\n\n # Check if data exceeds MAX_TEXTURE_SIZE and downsample\n if (\n self.MAX_TEXTURE_SIZE_2D is not None\n and self.layer.dims.ndisplay == 2\n ):\n data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_2D)\n elif (\n self.MAX_TEXTURE_SIZE_3D is not None\n and self.layer.dims.ndisplay == 3\n ):\n data = self.downsample_texture(data, self.MAX_TEXTURE_SIZE_3D)\n\n # Check if ndisplay has changed current node type needs updating\n if (\n self.layer.dims.ndisplay == 3\n and not isinstance(self.node, VolumeNode)\n ) or (\n self.layer.dims.ndisplay == 2\n and not isinstance(self.node, ImageNode)\n ):\n self._on_display_change(data)\n else:\n if self.layer.dims.ndisplay == 2:\n self.node._need_colortransform_update = True\n self.node.set_data(data)\n else:\n self.node.set_data(data, clim=self.layer.contrast_limits)\n\n # Call to update order of translation values with new dims:\n self._on_scale_change()\n self._on_translate_change()\n self.node.update()\n\n def _on_interpolation_change(self, interpolation):\n \"\"\"Receive layer model isosurface change event and update the visual.\n\n Parameters\n ----------\n interpolation : float\n Iso surface threshold value, between 0 and 1.\n \"\"\"\n self.node.interpolation = interpolation\n\n def _on_rendering_change(self, rendering):\n \"\"\"Receive layer model rendering change event and update dropdown menu.\n\n Parameters\n ----------\n text : str\n Rendering mode used by VisPy.\n Selects a preset rendering mode in VisPy that determines how\n volume is displayed:\n * translucent: voxel colors are blended along the view ray until\n the result is opaque.\n * mip: maxiumum intensity projection. Cast a ray and display the\n maximum value that was encountered.\n * additive: voxel colors are added along the view ray until\n the result is saturated.\n * iso: isosurface. Cast a ray until a certain threshold is\n encountered. At that location, lighning calculations are\n performed to give the visual appearance of a surface.\n * attenuated_mip: attenuated maxiumum intensity projection. Cast a\n ray and attenuate values based on integral of encountered values,\n display the maximum value that was encountered after attenuation.\n This will make nearer objects appear more prominent.\n \"\"\"\n if isinstance(self.node, VolumeNode):\n self.node.method = rendering\n if Rendering(rendering) == Rendering.ISO:\n self.node.threshold = float(self._iso_threshold)\n elif Rendering(rendering) == Rendering.ATTENUATED_MIP:\n self.node.threshold = float(self._attenuation)\n\n def _on_colormap_change(self, colormap):\n \"\"\"Receive layer model colormap change event and update the visual.\n\n Parameters\n ----------\n colormap : str or tuple\n Colormap name or tuple of (name, vispy.color.Colormap).\n \"\"\"\n name, cmap = ensure_colormap_tuple(colormap)\n # Once #1842 and #1844 from vispy are released and gamma adjustment is\n # done on the GPU this can be dropped\n self._raw_cmap = cmap\n if self._gamma != 1:\n # when gamma!=1, we instantiate a new colormap with 256 control\n # points from 0-1\n node_cmap = Colormap(cmap[np.linspace(0, 1, 256) ** self._gamma])\n else:\n node_cmap = cmap\n self.node.cmap = node_cmap\n\n def _on_contrast_limits_change(self, contrast_limits):\n \"\"\"Receive layer model contrast limits change event and update visual.\n\n Parameters\n ----------\n contrast_limits : tuple\n Contrast limits.\n \"\"\"\n # Once #1842 from vispy is released this if else can be dropped\n if isinstance(self.node, VolumeNode):\n self._on_slice_data_change()\n else:\n self.node.clim = contrast_limits\n\n def _on_gamma_change(self, gamma):\n \"\"\"Receive the layer model gamma change event and update the visual.\n\n Parameters\n ----------\n gamma : float\n Gamma value.\n \"\"\"\n # Once #1842 and #1844 from vispy are released and gamma adjustment is\n # done on the GPU this can be dropped\n if gamma != 1:\n # when gamma!=1, we instantiate a new colormap with 256 control\n # points from 0-1\n cmap = Colormap(self._raw_cmap[np.linspace(0, 1, 256) ** gamma])\n else:\n cmap = self._raw_cmap\n self._gamma = gamma\n self.node.cmap = cmap\n\n def _on_iso_threshold_change(self, iso_threshold):\n \"\"\"Receive layer model isosurface change event and update the visual.\n\n Parameters\n ----------\n iso_threshold : float\n Iso surface threshold value, between 0 and 1.\n \"\"\"\n if (\n isinstance(self.node, VolumeNode)\n and Rendering(self.node.method) == Rendering.ISO\n ):\n self._iso_threshold = iso_threshold\n self.node.threshold = float(iso_threshold)\n\n def _on_attenuation_change(self, attenuation):\n \"\"\"Receive layer model attenuation change event and update the visual.\n\n Parameters\n ----------\n attenuation : float\n Attenuation value, between 0 and 2.\n \"\"\"\n if (\n isinstance(self.node, VolumeNode)\n and Rendering(self.node.method) == Rendering.ATTENUATED_MIP\n ):\n self._attenuation = attenuation\n self.node.threshold = float(attenuation)\n\n def reset(self, event=None):\n self._reset_base()\n self._on_colormap_change(self.layer.colormap)\n self._on_rendering_change(self.layer.rendering)\n if isinstance(self.node, ImageNode):\n self._on_contrast_limits_change(self.layer.contrast_limits)\n\n def downsample_texture(self, data, MAX_TEXTURE_SIZE):\n \"\"\"Downsample data based on maximum allowed texture size.\n\n Parameters\n ----------\n data : array\n Data to be downsampled if needed.\n MAX_TEXTURE_SIZE : int\n Maximum allowed texture size.\n\n Returns\n -------\n data : array\n Data that now fits inside texture.\n \"\"\"\n if np.any(np.greater(data.shape, MAX_TEXTURE_SIZE)):\n if self.layer.multiscale:\n raise ValueError(\n f\"Shape of individual tiles in multiscale {data.shape} \"\n f\"cannot exceed GL_MAX_TEXTURE_SIZE \"\n f\"{MAX_TEXTURE_SIZE}. Rendering is currently in \"\n f\"{self.layer.dims.ndisplay}D mode.\"\n )\n warnings.warn(\n f\"data shape {data.shape} exceeds GL_MAX_TEXTURE_SIZE \"\n f\"{MAX_TEXTURE_SIZE} in at least one axis and \"\n f\"will be downsampled. Rendering is currently in \"\n f\"{self.layer.dims.ndisplay}D mode.\"\n )\n downsample = np.ceil(\n np.divide(data.shape, MAX_TEXTURE_SIZE)\n ).astype(int)\n scale = np.ones(self.layer.ndim)\n for i, d in enumerate(self.layer.dims.displayed):\n scale[d] = downsample[i]\n self.layer._transforms['tile2data'].scale = scale\n self._on_scale_change()\n slices = tuple(slice(None, None, ds) for ds in downsample)\n data = data[slices]\n return data\n", "path": "napari/_vispy/vispy_image_layer.py"}]} | 3,547 | 328 |
gh_patches_debug_31286 | rasdani/github-patches | git_diff | ocf__ocfweb-57 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Home page should have a link to password reset, check print quota, view print queue
</issue>
<code>
[start of ocfweb/context_processors.py]
1 import re
2 from datetime import date
3 from datetime import datetime
4
5 from ocflib.lab.hours import DayHours
6
7 from ocfweb.component.lab_status import get_lab_status
8
9
10 def ocf_template_processor(request):
11 now = datetime.now()
12 today = date.today()
13 hours = DayHours.from_date(today)
14
15 base_css_classes = []
16 if request.resolver_match.url_name:
17 page_class = 'page-' + request.resolver_match.url_name
18 base_css_classes.append(page_class)
19
20 for arg in request.resolver_match.args:
21 page_class += '-' + re.sub('[^a-zA-Z_\-]', '-', arg)
22 base_css_classes.append(page_class)
23
24 return {
25 'lab_is_open': hours.is_open(now),
26 'current_lab_hours': hours,
27 'lab_status': get_lab_status(),
28 'base_css_classes': ' '.join(base_css_classes),
29 }
30
[end of ocfweb/context_processors.py]
[start of setup.py]
1 from setuptools import find_packages
2 from setuptools import setup
3
4 try:
5 with open('.version') as f:
6 VERSION = f.readline().strip()
7 except IOError:
8 VERSION = 'unknown'
9
10 setup(
11 name='ocfweb',
12 version=VERSION,
13 packages=find_packages(exclude=['debian', 'virtualenv_run']),
14 include_package_data=True,
15 url='https://www.ocf.berkeley.edu/',
16 author='Open Computing Facility',
17 author_email='[email protected]',
18 install_requires=[
19 'cachetools',
20 'django>=1.8,<1.8.999',
21 'gunicorn',
22 'libsass',
23 'lxml',
24 'mistune',
25 'ocflib',
26 'pygments',
27 'python-dateutil',
28 ],
29 sass_manifests={
30 'ocfweb': ('static/scss',),
31 },
32 )
33
[end of setup.py]
[start of ocfweb/main/home.py]
1 from datetime import date
2 from datetime import timedelta
3
4 from django.shortcuts import render_to_response
5 from django.template import RequestContext
6 from django.utils import timezone
7 from ocflib.lab.hours import get_hours
8 from ocflib.lab.staff_hours import get_staff_hours_soonest_first
9
10 from ocfweb.component.blog import get_blog_posts
11 from ocfweb.component.lab_status import get_lab_status
12
13
14 def home(request):
15 hours = [
16 get_hours(date.today() + timedelta(days=i)) for i in range(5)
17 ]
18
19 blog_posts = [
20 post for post
21 in get_blog_posts()
22 if timezone.now() - post.published < timedelta(days=365)
23 ][:2]
24
25 return render_to_response(
26 'home.html',
27 {
28 'fulltitle': 'Open Computing Facility at UC Berkeley',
29 'description': (
30 'The Open Computing Facility is an all-volunteer student '
31 'organization dedicated to free and open-source computing for all UC '
32 'Berkeley students.'''
33 ),
34 'staff_hours': get_staff_hours_soonest_first()[:2],
35 'hours': hours,
36 'today': hours[0],
37 'blog_posts': blog_posts,
38 'lab_status': get_lab_status(),
39 },
40 context_instance=RequestContext(request),
41 )
42
[end of ocfweb/main/home.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ocfweb/context_processors.py b/ocfweb/context_processors.py
--- a/ocfweb/context_processors.py
+++ b/ocfweb/context_processors.py
@@ -1,12 +1,20 @@
import re
from datetime import date
from datetime import datetime
+from ipaddress import ip_address
+from ipaddress import ip_network
+from ipware.ip import get_real_ip
from ocflib.lab.hours import DayHours
from ocfweb.component.lab_status import get_lab_status
+def is_ocf_ip(ip):
+ # TODO: move this to ocflib when it drops Python 3.2 support
+ return ip_address(ip) in ip_network('169.229.10.0/24')
+
+
def ocf_template_processor(request):
now = datetime.now()
today = date.today()
@@ -21,9 +29,12 @@
page_class += '-' + re.sub('[^a-zA-Z_\-]', '-', arg)
base_css_classes.append(page_class)
+ real_ip = get_real_ip(request)
+
return {
'lab_is_open': hours.is_open(now),
'current_lab_hours': hours,
'lab_status': get_lab_status(),
'base_css_classes': ' '.join(base_css_classes),
+ 'is_ocf_ip': is_ocf_ip(real_ip) if real_ip else True,
}
diff --git a/ocfweb/main/home.py b/ocfweb/main/home.py
--- a/ocfweb/main/home.py
+++ b/ocfweb/main/home.py
@@ -13,7 +13,7 @@
def home(request):
hours = [
- get_hours(date.today() + timedelta(days=i)) for i in range(5)
+ get_hours(date.today() + timedelta(days=i)) for i in range(3)
]
blog_posts = [
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,6 +18,7 @@
install_requires=[
'cachetools',
'django>=1.8,<1.8.999',
+ 'django-ipware',
'gunicorn',
'libsass',
'lxml',
| {"golden_diff": "diff --git a/ocfweb/context_processors.py b/ocfweb/context_processors.py\n--- a/ocfweb/context_processors.py\n+++ b/ocfweb/context_processors.py\n@@ -1,12 +1,20 @@\n import re\n from datetime import date\n from datetime import datetime\n+from ipaddress import ip_address\n+from ipaddress import ip_network\n \n+from ipware.ip import get_real_ip\n from ocflib.lab.hours import DayHours\n \n from ocfweb.component.lab_status import get_lab_status\n \n \n+def is_ocf_ip(ip):\n+ # TODO: move this to ocflib when it drops Python 3.2 support\n+ return ip_address(ip) in ip_network('169.229.10.0/24')\n+\n+\n def ocf_template_processor(request):\n now = datetime.now()\n today = date.today()\n@@ -21,9 +29,12 @@\n page_class += '-' + re.sub('[^a-zA-Z_\\-]', '-', arg)\n base_css_classes.append(page_class)\n \n+ real_ip = get_real_ip(request)\n+\n return {\n 'lab_is_open': hours.is_open(now),\n 'current_lab_hours': hours,\n 'lab_status': get_lab_status(),\n 'base_css_classes': ' '.join(base_css_classes),\n+ 'is_ocf_ip': is_ocf_ip(real_ip) if real_ip else True,\n }\ndiff --git a/ocfweb/main/home.py b/ocfweb/main/home.py\n--- a/ocfweb/main/home.py\n+++ b/ocfweb/main/home.py\n@@ -13,7 +13,7 @@\n \n def home(request):\n hours = [\n- get_hours(date.today() + timedelta(days=i)) for i in range(5)\n+ get_hours(date.today() + timedelta(days=i)) for i in range(3)\n ]\n \n blog_posts = [\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,6 +18,7 @@\n install_requires=[\n 'cachetools',\n 'django>=1.8,<1.8.999',\n+ 'django-ipware',\n 'gunicorn',\n 'libsass',\n 'lxml',\n", "issue": "Home page should have a link to password reset, check print quota, view print queue\n\n", "before_files": [{"content": "import re\nfrom datetime import date\nfrom datetime import datetime\n\nfrom ocflib.lab.hours import DayHours\n\nfrom ocfweb.component.lab_status import get_lab_status\n\n\ndef ocf_template_processor(request):\n now = datetime.now()\n today = date.today()\n hours = DayHours.from_date(today)\n\n base_css_classes = []\n if request.resolver_match.url_name:\n page_class = 'page-' + request.resolver_match.url_name\n base_css_classes.append(page_class)\n\n for arg in request.resolver_match.args:\n page_class += '-' + re.sub('[^a-zA-Z_\\-]', '-', arg)\n base_css_classes.append(page_class)\n\n return {\n 'lab_is_open': hours.is_open(now),\n 'current_lab_hours': hours,\n 'lab_status': get_lab_status(),\n 'base_css_classes': ' '.join(base_css_classes),\n }\n", "path": "ocfweb/context_processors.py"}, {"content": "from setuptools import find_packages\nfrom setuptools import setup\n\ntry:\n with open('.version') as f:\n VERSION = f.readline().strip()\nexcept IOError:\n VERSION = 'unknown'\n\nsetup(\n name='ocfweb',\n version=VERSION,\n packages=find_packages(exclude=['debian', 'virtualenv_run']),\n include_package_data=True,\n url='https://www.ocf.berkeley.edu/',\n author='Open Computing Facility',\n author_email='[email protected]',\n install_requires=[\n 'cachetools',\n 'django>=1.8,<1.8.999',\n 'gunicorn',\n 'libsass',\n 'lxml',\n 'mistune',\n 'ocflib',\n 'pygments',\n 'python-dateutil',\n ],\n sass_manifests={\n 'ocfweb': ('static/scss',),\n },\n)\n", "path": "setup.py"}, {"content": "from datetime import date\nfrom datetime import timedelta\n\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\nfrom django.utils import timezone\nfrom ocflib.lab.hours import get_hours\nfrom ocflib.lab.staff_hours import get_staff_hours_soonest_first\n\nfrom ocfweb.component.blog import get_blog_posts\nfrom ocfweb.component.lab_status import get_lab_status\n\n\ndef home(request):\n hours = [\n get_hours(date.today() + timedelta(days=i)) for i in range(5)\n ]\n\n blog_posts = [\n post for post\n in get_blog_posts()\n if timezone.now() - post.published < timedelta(days=365)\n ][:2]\n\n return render_to_response(\n 'home.html',\n {\n 'fulltitle': 'Open Computing Facility at UC Berkeley',\n 'description': (\n 'The Open Computing Facility is an all-volunteer student '\n 'organization dedicated to free and open-source computing for all UC '\n 'Berkeley students.'''\n ),\n 'staff_hours': get_staff_hours_soonest_first()[:2],\n 'hours': hours,\n 'today': hours[0],\n 'blog_posts': blog_posts,\n 'lab_status': get_lab_status(),\n },\n context_instance=RequestContext(request),\n )\n", "path": "ocfweb/main/home.py"}]} | 1,429 | 498 |
gh_patches_debug_16219 | rasdani/github-patches | git_diff | getsentry__sentry-5339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Localhost filter should consider affected URL
Right now the "localhost" filter only consider's the affected User's IP: https://github.com/getsentry/sentry/blob/master/src/sentry/filters/localhost.py
But users are also expecting that this should filter server errors triggered from a server running on localhost (e.g. local development).
See also: #4729, #4762
</issue>
<code>
[start of src/sentry/filters/localhost.py]
1 from __future__ import absolute_import
2
3 from .base import Filter
4
5 LOCAL_IPS = frozenset(['127.0.0.1', '::1'])
6
7
8 class LocalhostFilter(Filter):
9 id = 'localhost'
10 name = 'Filter out errors coming from localhost'
11 description = 'This applies to to both IPv4 (``127.0.0.1``) and IPv6 (``::1``) addresses.'
12
13 def get_ip_address(self, data):
14 try:
15 return data['sentry.interfaces.User']['ip_address']
16 except KeyError:
17 return ''
18
19 def test(self, data):
20 return self.get_ip_address(data) in LOCAL_IPS
21
[end of src/sentry/filters/localhost.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/filters/localhost.py b/src/sentry/filters/localhost.py
--- a/src/sentry/filters/localhost.py
+++ b/src/sentry/filters/localhost.py
@@ -1,8 +1,10 @@
from __future__ import absolute_import
from .base import Filter
+from six.moves.urllib.parse import urlparse
LOCAL_IPS = frozenset(['127.0.0.1', '::1'])
+LOCAL_DOMAINS = frozenset(['127.0.0.1', 'localhost'])
class LocalhostFilter(Filter):
@@ -16,5 +18,14 @@
except KeyError:
return ''
+ def get_url(self, data):
+ try:
+ return data['sentry.interfaces.Http']['url'] or ''
+ except KeyError:
+ return ''
+
+ def get_domain(self, data):
+ return urlparse(self.get_url(data)).netloc
+
def test(self, data):
- return self.get_ip_address(data) in LOCAL_IPS
+ return self.get_ip_address(data) in LOCAL_IPS or self.get_domain(data) in LOCAL_DOMAINS
| {"golden_diff": "diff --git a/src/sentry/filters/localhost.py b/src/sentry/filters/localhost.py\n--- a/src/sentry/filters/localhost.py\n+++ b/src/sentry/filters/localhost.py\n@@ -1,8 +1,10 @@\n from __future__ import absolute_import\n \n from .base import Filter\n+from six.moves.urllib.parse import urlparse\n \n LOCAL_IPS = frozenset(['127.0.0.1', '::1'])\n+LOCAL_DOMAINS = frozenset(['127.0.0.1', 'localhost'])\n \n \n class LocalhostFilter(Filter):\n@@ -16,5 +18,14 @@\n except KeyError:\n return ''\n \n+ def get_url(self, data):\n+ try:\n+ return data['sentry.interfaces.Http']['url'] or ''\n+ except KeyError:\n+ return ''\n+\n+ def get_domain(self, data):\n+ return urlparse(self.get_url(data)).netloc\n+\n def test(self, data):\n- return self.get_ip_address(data) in LOCAL_IPS\n+ return self.get_ip_address(data) in LOCAL_IPS or self.get_domain(data) in LOCAL_DOMAINS\n", "issue": "Localhost filter should consider affected URL\nRight now the \"localhost\" filter only consider's the affected User's IP: https://github.com/getsentry/sentry/blob/master/src/sentry/filters/localhost.py\r\n\r\nBut users are also expecting that this should filter server errors triggered from a server running on localhost (e.g. local development).\r\n\r\nSee also: #4729, #4762\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom .base import Filter\n\nLOCAL_IPS = frozenset(['127.0.0.1', '::1'])\n\n\nclass LocalhostFilter(Filter):\n id = 'localhost'\n name = 'Filter out errors coming from localhost'\n description = 'This applies to to both IPv4 (``127.0.0.1``) and IPv6 (``::1``) addresses.'\n\n def get_ip_address(self, data):\n try:\n return data['sentry.interfaces.User']['ip_address']\n except KeyError:\n return ''\n\n def test(self, data):\n return self.get_ip_address(data) in LOCAL_IPS\n", "path": "src/sentry/filters/localhost.py"}]} | 809 | 254 |
gh_patches_debug_51927 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1683 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Handle www. prefix when input in /availability API
[PR#100 in getgov-home for additional context
](https://github.com/cisagov/getgov-home/pull/100)
Handle edge case of including [www](http://www/). in the search input. This is most likely best handled by updating the manage.get.gov's availability endpoint to handle the [www](http://www/). prefix when parsing arguments, similarly to how the API handles the .gov suffix.
Per Katherine:
I envisioned that it would strip out the "www." when checking availability. So the confirmation message for "[www.example.gov](http://www.example.gov/)" would read: "[example.gov](http://example.gov/) is not available" Is that what you were thinking,
Example:
if [example.gov](http://example.gov/) was not available to begin with. I would think yes it strips www. then if [example.gov](http://example.gov/) is already taken it says “[example.gov](http://example.gov/) is not available”. If [example.gov](http://example.gov/) is actually available then entering [www.example.gov](http://www.example.gov/) would result in “[example.gov](http://example.gov/) is available”. Basically have it just ignore a www. at the start.
_Originally posted by @erinysong in https://github.com/cisagov/manage.get.gov/issues/476#issuecomment-1802870748_
[Slack thread](https://cisa-corp.slack.com/archives/C05BDEA3C11/p1705599697584059)
</issue>
<code>
[start of src/registrar/models/utility/domain_helper.py]
1 import re
2
3 from django import forms
4 from django.http import JsonResponse
5
6 from api.views import DOMAIN_API_MESSAGES, check_domain_available
7 from registrar.utility import errors
8 from epplibwrapper.errors import RegistryError
9 from registrar.utility.enums import ValidationReturnType
10
11
12 class DomainHelper:
13 """Utility functions and constants for domain names."""
14
15 # a domain name is alphanumeric or hyphen, up to 63 characters, doesn't
16 # begin or end with a hyphen, followed by a TLD of 2-6 alphabetic characters
17 DOMAIN_REGEX = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)\.[A-Za-z]{2,6}$")
18
19 # a domain can be no longer than 253 characters in total
20 MAX_LENGTH = 253
21
22 @classmethod
23 def string_could_be_domain(cls, domain: str | None) -> bool:
24 """Return True if the string could be a domain name, otherwise False."""
25 if not isinstance(domain, str):
26 return False
27 return bool(cls.DOMAIN_REGEX.match(domain))
28
29 @classmethod
30 def validate(cls, domain: str, blank_ok=False) -> str:
31 """Attempt to determine if a domain name could be requested."""
32
33 # Split into pieces for the linter
34 domain = cls._validate_domain_string(domain, blank_ok)
35
36 try:
37 if not check_domain_available(domain):
38 raise errors.DomainUnavailableError()
39 except RegistryError as err:
40 raise errors.RegistrySystemError() from err
41 return domain
42
43 @staticmethod
44 def _validate_domain_string(domain, blank_ok):
45 """Normalize the domain string, and check its content"""
46 if domain is None:
47 raise errors.BlankValueError()
48
49 if not isinstance(domain, str):
50 raise errors.InvalidDomainError()
51
52 domain = domain.lower().strip()
53
54 if domain == "" and not blank_ok:
55 raise errors.BlankValueError()
56 elif domain == "":
57 # If blank ok is true, just return the domain
58 return domain
59
60 if domain.endswith(".gov"):
61 domain = domain[:-4]
62
63 if "." in domain:
64 raise errors.ExtraDotsError()
65
66 if not DomainHelper.string_could_be_domain(domain + ".gov"):
67 raise errors.InvalidDomainError()
68
69 return domain
70
71 @classmethod
72 def validate_and_handle_errors(cls, domain, return_type, blank_ok=False):
73 """
74 Validates a domain and returns an appropriate response based on the validation result.
75
76 This method uses the `validate` method to validate the domain. If validation fails, it catches the exception,
77 maps it to a corresponding error code, and returns a response based on the `return_type` parameter.
78
79 Args:
80 domain (str): The domain to validate.
81 return_type (ValidationReturnType): Determines the type of response (JSON or form validation error).
82 blank_ok (bool, optional): If True, blank input does not raise an exception. Defaults to False.
83
84 Returns:
85 tuple: The validated domain (or None if validation failed), and the response (success or error).
86 """ # noqa
87
88 # Map each exception to a corresponding error code
89 error_map = {
90 errors.BlankValueError: "required",
91 errors.ExtraDotsError: "extra_dots",
92 errors.DomainUnavailableError: "unavailable",
93 errors.RegistrySystemError: "error",
94 errors.InvalidDomainError: "invalid",
95 }
96
97 validated = None
98 response = None
99
100 try:
101 # Attempt to validate the domain
102 validated = cls.validate(domain, blank_ok)
103
104 # Get a list of each possible exception, and the code to return
105 except tuple(error_map.keys()) as error:
106 # If an error is caught, get its type
107 error_type = type(error)
108
109 # Generate the response based on the error code and return type
110 response = DomainHelper._return_form_error_or_json_response(return_type, code=error_map.get(error_type))
111 else:
112 # For form validation, we do not need to display the success message
113 if return_type != ValidationReturnType.FORM_VALIDATION_ERROR:
114 response = DomainHelper._return_form_error_or_json_response(return_type, code="success", available=True)
115
116 # Return the validated domain and the response (either error or success)
117 return (validated, response)
118
119 @staticmethod
120 def _return_form_error_or_json_response(return_type: ValidationReturnType, code, available=False):
121 """
122 Returns an error response based on the `return_type`.
123
124 If `return_type` is `FORM_VALIDATION_ERROR`, raises a form validation error.
125 If `return_type` is `JSON_RESPONSE`, returns a JSON response with 'available', 'code', and 'message' fields.
126 If `return_type` is neither, raises a ValueError.
127
128 Args:
129 return_type (ValidationReturnType): The type of error response.
130 code (str): The error code for the error message.
131 available (bool, optional): Availability, only used for JSON responses. Defaults to False.
132
133 Returns:
134 A JSON response or a form validation error.
135
136 Raises:
137 ValueError: If `return_type` is neither `FORM_VALIDATION_ERROR` nor `JSON_RESPONSE`.
138 """ # noqa
139 match return_type:
140 case ValidationReturnType.FORM_VALIDATION_ERROR:
141 raise forms.ValidationError(DOMAIN_API_MESSAGES[code], code=code)
142 case ValidationReturnType.JSON_RESPONSE:
143 return JsonResponse({"available": available, "code": code, "message": DOMAIN_API_MESSAGES[code]})
144 case _:
145 raise ValueError("Invalid return type specified")
146
147 @classmethod
148 def sld(cls, domain: str):
149 """
150 Get the second level domain. Example: `gsa.gov` -> `gsa`.
151
152 If no TLD is present, returns the original string.
153 """
154 return domain.split(".")[0]
155
156 @classmethod
157 def tld(cls, domain: str):
158 """Get the top level domain. Example: `gsa.gov` -> `gov`."""
159 parts = domain.rsplit(".")
160 return parts[-1] if len(parts) > 1 else ""
161
[end of src/registrar/models/utility/domain_helper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/registrar/models/utility/domain_helper.py b/src/registrar/models/utility/domain_helper.py
--- a/src/registrar/models/utility/domain_helper.py
+++ b/src/registrar/models/utility/domain_helper.py
@@ -57,6 +57,9 @@
# If blank ok is true, just return the domain
return domain
+ if domain.startswith("www."):
+ domain = domain[4:]
+
if domain.endswith(".gov"):
domain = domain[:-4]
| {"golden_diff": "diff --git a/src/registrar/models/utility/domain_helper.py b/src/registrar/models/utility/domain_helper.py\n--- a/src/registrar/models/utility/domain_helper.py\n+++ b/src/registrar/models/utility/domain_helper.py\n@@ -57,6 +57,9 @@\n # If blank ok is true, just return the domain\n return domain\n \n+ if domain.startswith(\"www.\"):\n+ domain = domain[4:]\n+\n if domain.endswith(\".gov\"):\n domain = domain[:-4]\n", "issue": "Handle www. prefix when input in /availability API\n[PR#100 in getgov-home for additional context\r\n](https://github.com/cisagov/getgov-home/pull/100)\r\nHandle edge case of including [www](http://www/). in the search input. This is most likely best handled by updating the manage.get.gov's availability endpoint to handle the [www](http://www/). prefix when parsing arguments, similarly to how the API handles the .gov suffix.\r\n\r\nPer Katherine:\r\n I envisioned that it would strip out the \"www.\" when checking availability. So the confirmation message for \"[www.example.gov](http://www.example.gov/)\" would read: \"[example.gov](http://example.gov/) is not available\" Is that what you were thinking,\r\n\r\nExample: \r\n if [example.gov](http://example.gov/) was not available to begin with. I would think yes it strips www. then if [example.gov](http://example.gov/) is already taken it says \u201c[example.gov](http://example.gov/) is not available\u201d. If [example.gov](http://example.gov/) is actually available then entering [www.example.gov](http://www.example.gov/) would result in \u201c[example.gov](http://example.gov/) is available\u201d. Basically have it just ignore a www. at the start.\r\n\r\n_Originally posted by @erinysong in https://github.com/cisagov/manage.get.gov/issues/476#issuecomment-1802870748_\r\n \r\n[Slack thread](https://cisa-corp.slack.com/archives/C05BDEA3C11/p1705599697584059)\n", "before_files": [{"content": "import re\n\nfrom django import forms\nfrom django.http import JsonResponse\n\nfrom api.views import DOMAIN_API_MESSAGES, check_domain_available\nfrom registrar.utility import errors\nfrom epplibwrapper.errors import RegistryError\nfrom registrar.utility.enums import ValidationReturnType\n\n\nclass DomainHelper:\n \"\"\"Utility functions and constants for domain names.\"\"\"\n\n # a domain name is alphanumeric or hyphen, up to 63 characters, doesn't\n # begin or end with a hyphen, followed by a TLD of 2-6 alphabetic characters\n DOMAIN_REGEX = re.compile(r\"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)\\.[A-Za-z]{2,6}$\")\n\n # a domain can be no longer than 253 characters in total\n MAX_LENGTH = 253\n\n @classmethod\n def string_could_be_domain(cls, domain: str | None) -> bool:\n \"\"\"Return True if the string could be a domain name, otherwise False.\"\"\"\n if not isinstance(domain, str):\n return False\n return bool(cls.DOMAIN_REGEX.match(domain))\n\n @classmethod\n def validate(cls, domain: str, blank_ok=False) -> str:\n \"\"\"Attempt to determine if a domain name could be requested.\"\"\"\n\n # Split into pieces for the linter\n domain = cls._validate_domain_string(domain, blank_ok)\n\n try:\n if not check_domain_available(domain):\n raise errors.DomainUnavailableError()\n except RegistryError as err:\n raise errors.RegistrySystemError() from err\n return domain\n\n @staticmethod\n def _validate_domain_string(domain, blank_ok):\n \"\"\"Normalize the domain string, and check its content\"\"\"\n if domain is None:\n raise errors.BlankValueError()\n\n if not isinstance(domain, str):\n raise errors.InvalidDomainError()\n\n domain = domain.lower().strip()\n\n if domain == \"\" and not blank_ok:\n raise errors.BlankValueError()\n elif domain == \"\":\n # If blank ok is true, just return the domain\n return domain\n\n if domain.endswith(\".gov\"):\n domain = domain[:-4]\n\n if \".\" in domain:\n raise errors.ExtraDotsError()\n\n if not DomainHelper.string_could_be_domain(domain + \".gov\"):\n raise errors.InvalidDomainError()\n\n return domain\n\n @classmethod\n def validate_and_handle_errors(cls, domain, return_type, blank_ok=False):\n \"\"\"\n Validates a domain and returns an appropriate response based on the validation result.\n\n This method uses the `validate` method to validate the domain. If validation fails, it catches the exception,\n maps it to a corresponding error code, and returns a response based on the `return_type` parameter.\n\n Args:\n domain (str): The domain to validate.\n return_type (ValidationReturnType): Determines the type of response (JSON or form validation error).\n blank_ok (bool, optional): If True, blank input does not raise an exception. Defaults to False.\n\n Returns:\n tuple: The validated domain (or None if validation failed), and the response (success or error).\n \"\"\" # noqa\n\n # Map each exception to a corresponding error code\n error_map = {\n errors.BlankValueError: \"required\",\n errors.ExtraDotsError: \"extra_dots\",\n errors.DomainUnavailableError: \"unavailable\",\n errors.RegistrySystemError: \"error\",\n errors.InvalidDomainError: \"invalid\",\n }\n\n validated = None\n response = None\n\n try:\n # Attempt to validate the domain\n validated = cls.validate(domain, blank_ok)\n\n # Get a list of each possible exception, and the code to return\n except tuple(error_map.keys()) as error:\n # If an error is caught, get its type\n error_type = type(error)\n\n # Generate the response based on the error code and return type\n response = DomainHelper._return_form_error_or_json_response(return_type, code=error_map.get(error_type))\n else:\n # For form validation, we do not need to display the success message\n if return_type != ValidationReturnType.FORM_VALIDATION_ERROR:\n response = DomainHelper._return_form_error_or_json_response(return_type, code=\"success\", available=True)\n\n # Return the validated domain and the response (either error or success)\n return (validated, response)\n\n @staticmethod\n def _return_form_error_or_json_response(return_type: ValidationReturnType, code, available=False):\n \"\"\"\n Returns an error response based on the `return_type`.\n\n If `return_type` is `FORM_VALIDATION_ERROR`, raises a form validation error.\n If `return_type` is `JSON_RESPONSE`, returns a JSON response with 'available', 'code', and 'message' fields.\n If `return_type` is neither, raises a ValueError.\n\n Args:\n return_type (ValidationReturnType): The type of error response.\n code (str): The error code for the error message.\n available (bool, optional): Availability, only used for JSON responses. Defaults to False.\n\n Returns:\n A JSON response or a form validation error.\n\n Raises:\n ValueError: If `return_type` is neither `FORM_VALIDATION_ERROR` nor `JSON_RESPONSE`.\n \"\"\" # noqa\n match return_type:\n case ValidationReturnType.FORM_VALIDATION_ERROR:\n raise forms.ValidationError(DOMAIN_API_MESSAGES[code], code=code)\n case ValidationReturnType.JSON_RESPONSE:\n return JsonResponse({\"available\": available, \"code\": code, \"message\": DOMAIN_API_MESSAGES[code]})\n case _:\n raise ValueError(\"Invalid return type specified\")\n\n @classmethod\n def sld(cls, domain: str):\n \"\"\"\n Get the second level domain. Example: `gsa.gov` -> `gsa`.\n\n If no TLD is present, returns the original string.\n \"\"\"\n return domain.split(\".\")[0]\n\n @classmethod\n def tld(cls, domain: str):\n \"\"\"Get the top level domain. Example: `gsa.gov` -> `gov`.\"\"\"\n parts = domain.rsplit(\".\")\n return parts[-1] if len(parts) > 1 else \"\"\n", "path": "src/registrar/models/utility/domain_helper.py"}]} | 2,597 | 105 |
gh_patches_debug_1182 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-1049 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
efs tag support
I am finding that searching for tagging of EFS resources does not consistently report the correct results. It did find an EFS that was incorrectly tagged, but after it was corrected it continues to report the same resource. I use the same filter for other resource types and do not see this behavior.
```
- name: efs-tag-compliance
resource: efs
description:
Notify if an EFS does not comply with tagging best practices.
mode:
type: periodic
schedule: "rate(24 hours)"
role: arn:aws:iam::MYACCOUNT:role/cloud-custodian
filters:
- or:
- "tag:CostCenter": absent
- "tag:POC": absent
- "tag:Service": absent
- "tag:Name": absent
...
```
</issue>
<code>
[start of c7n/resources/efs.py]
1 # Copyright 2016 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from c7n.actions import Action
15 from c7n.manager import resources
16 from c7n.query import QueryResourceManager
17 from c7n.utils import local_session, type_schema, get_retry
18
19
20 @resources.register('efs')
21 class ElasticFileSystem(QueryResourceManager):
22
23 class resource_type(object):
24 service = 'efs'
25 enum_spec = ('describe_file_systems', 'FileSystems', None)
26 id = 'FileSystemId'
27 name = 'Name'
28 date = 'CreationTime'
29 dimension = None
30
31
32 @ElasticFileSystem.action_registry.register('delete')
33 class Delete(Action):
34
35 schema = type_schema('delete')
36 permissions = ('efs:DescribeMountTargets',
37 'efs:DeleteMountTargets',
38 'efs:DeleteFileSystem')
39
40 def process(self, resources):
41 client = local_session(self.manager.session_factory).client('efs')
42 self.unmount_filesystems(resources)
43 retry = get_retry(('FileSystemInUse',), 12)
44 for r in resources:
45 retry(client.delete_file_system, FileSystemId=r['FileSystemId'])
46
47 def unmount_filesystems(self, resources):
48 client = local_session(self.manager.session_factory).client('efs')
49 for r in resources:
50 if not r['NumberOfMountTargets']:
51 continue
52 for t in client.describe_mount_targets(
53 FileSystemId=r['FileSystemId'])['MountTargets']:
54 client.delete_mount_target(MountTargetId=t['MountTargetId'])
55
[end of c7n/resources/efs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/c7n/resources/efs.py b/c7n/resources/efs.py
--- a/c7n/resources/efs.py
+++ b/c7n/resources/efs.py
@@ -27,6 +27,7 @@
name = 'Name'
date = 'CreationTime'
dimension = None
+ detail_spec = ('describe_tags', 'FileSystemId', 'FileSystemId', None)
@ElasticFileSystem.action_registry.register('delete')
| {"golden_diff": "diff --git a/c7n/resources/efs.py b/c7n/resources/efs.py\n--- a/c7n/resources/efs.py\n+++ b/c7n/resources/efs.py\n@@ -27,6 +27,7 @@\n name = 'Name'\n date = 'CreationTime'\n dimension = None\n+ detail_spec = ('describe_tags', 'FileSystemId', 'FileSystemId', None)\n \n \n @ElasticFileSystem.action_registry.register('delete')\n", "issue": "efs tag support\nI am finding that searching for tagging of EFS resources does not consistently report the correct results. It did find an EFS that was incorrectly tagged, but after it was corrected it continues to report the same resource. I use the same filter for other resource types and do not see this behavior.\r\n\r\n```\r\n- name: efs-tag-compliance\r\n resource: efs\r\n description:\r\n Notify if an EFS does not comply with tagging best practices.\r\n mode:\r\n type: periodic\r\n schedule: \"rate(24 hours)\"\r\n role: arn:aws:iam::MYACCOUNT:role/cloud-custodian\r\n filters:\r\n - or:\r\n - \"tag:CostCenter\": absent\r\n - \"tag:POC\": absent\r\n - \"tag:Service\": absent\r\n - \"tag:Name\": absent\r\n...\r\n```\n", "before_files": [{"content": "# Copyright 2016 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom c7n.actions import Action\nfrom c7n.manager import resources\nfrom c7n.query import QueryResourceManager\nfrom c7n.utils import local_session, type_schema, get_retry\n\n\[email protected]('efs')\nclass ElasticFileSystem(QueryResourceManager):\n\n class resource_type(object):\n service = 'efs'\n enum_spec = ('describe_file_systems', 'FileSystems', None)\n id = 'FileSystemId'\n name = 'Name'\n date = 'CreationTime'\n dimension = None\n\n\[email protected]_registry.register('delete')\nclass Delete(Action):\n\n schema = type_schema('delete')\n permissions = ('efs:DescribeMountTargets',\n 'efs:DeleteMountTargets',\n 'efs:DeleteFileSystem')\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n self.unmount_filesystems(resources)\n retry = get_retry(('FileSystemInUse',), 12)\n for r in resources:\n retry(client.delete_file_system, FileSystemId=r['FileSystemId'])\n\n def unmount_filesystems(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n for r in resources:\n if not r['NumberOfMountTargets']:\n continue\n for t in client.describe_mount_targets(\n FileSystemId=r['FileSystemId'])['MountTargets']:\n client.delete_mount_target(MountTargetId=t['MountTargetId'])\n", "path": "c7n/resources/efs.py"}]} | 1,259 | 100 |
gh_patches_debug_56095 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-4864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG]: colossalai 0.3.3 + torch 2.0.1 + baichuan-2 7b 训练保存 lr_scheduler 时会报 NotImplementedError 错
### 🐛 Describe the bug
用 colossalai 0.3.3 + torch 2.0.1 + baichuan-2 7b 训练保存 lr_scheduler 时 colossalai/nn/lr_scheduler/delayed.py 会报 NotImplementedError 错。
In [25]: lr_scheduler
Out[25]: <[colossalai.nn.lr](http://colossalai.nn.lr/)_scheduler.cosine.CosineAnnealingWarmupLR at 0x7f01cd616e00>
In [26]: booster.save_lr_scheduler(lr_scheduler, "/data/checkpoint/lr_scheduler")
```
in <module>:1
python3.10/site-packages/colossalai/booster/booster.py:308 in
save_lr_scheduler
305 lr_scheduler (LRScheduler): A lr scheduler boosted by Booster.
306 checkpoint (str): Path to the checkpoint. It must be a local file path.
307 """
❱ 308 self.checkpoint_io.save_lr_scheduler(lr_scheduler, checkpoint)
309
310 def load_lr_scheduler(self, lr_scheduler: LRScheduler, checkpoint: str) -> None:
311 """Load lr scheduler from checkpoint.
python3.10/site-packages/colossalai/booster/plugin/gemini_plugin.py:225
in save_lr_scheduler
222 Save model to checkpoint but only on master process.
223 """
224 if self.coordinator.is_master():
❱ 225 super().save_lr_scheduler(lr_scheduler, checkpoint)
226
227
228 class GeminiPlugin(DPPluginBase):
python3.10/site-packages/colossalai/checkpoint_io/checkpoint_io_base.py:
318 in save_lr_scheduler
315 lr_scheduler (LRScheduler): lr scheduler to be saved.
316 checkpoint: checkpoint path. The checkpoint path can only be a file path.
317 """
❱ 318 torch.save(lr_scheduler.state_dict(), checkpoint)
319
320 def load_lr_scheduler(self, lr_scheduler: LRScheduler, checkpoint: str):
321 """
python3.10/site-packages/colossalai/nn/lr_scheduler/delayed.py:93 in
state_dict
90 state_dict["after_scheduler_dict"] = state_dict["after_scheduler"].state_dic
91 del state_dict["after_scheduler"]
92 else:
❱ 93 raise NotImplementedError()
94 return state_dict
95
96 def get_lr(self):
```
进一步分析 lr_scheduler 里的信息
```
state_dict = {key: value for key, value in lr_scheduler.__dict__.items() if key not in "optimizer"}
# =>
{
'warmup_epochs': 2000,
'after_scheduler': <torch.optim.lr_scheduler.CosineAnnealingLR at 0x7f01cd6173a0>,
'finished': False,
'base_lrs': [0.0003],
'last_epoch': 1,
'verbose': False,
'_step_count': 2,
'_get_lr_called_within_step': False,
'_last_lr': [3e-07]
}
```
- 其中 after_scheduler 是 torch.optim.lr_scheduler.CosineAnnealingLR 的实例,而 torch.optim.lr_scheduler.CosineAnnealingLR 是继承的 LRScheduler,那么 after_scheduler 的父类是 LRScheduler
- _LRScheduler 是继承了 LRScheduler
- 而在 [save lr scheduler 时(delayed.py) 中](https://github.com/hpcaitech/ColossalAI/blob/822051d8884a46d4d8626330e21adfd6427c99a0/colossalai/nn/lr_scheduler/delayed.py#L88),是 `isinstance(state_dict['after_scheduler'], _LRScheduler)`
```
from torch.optim.lr_scheduler import _LRScheduler, LRScheduler
isinstance(state_dict['after_scheduler'], LRScheduler)
# => True
isinstance(state_dict['after_scheduler'], _LRScheduler)
# => False
```
**那这样,是否说明 应该用 `LRScheduler` 而不是 `_LRScheduler` 呢?**
注:baichuan-2 依赖 torch 2.0+,不能降到 2.0 以下(用 1.13 会报 TypeError: sdp_kernel() got an unexpected keyword argument 'enable_mem_efficient')
### Environment
- colossalai 0.3.3
- torch 2.0.1
- baichuan-2 7b
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/nn/lr_scheduler/delayed.py]
1 from torch.optim.lr_scheduler import _LRScheduler
2
3
4 class _enable_get_lr_call:
5 def __init__(self, o):
6 self.o = o
7
8 def __enter__(self):
9 self.o._get_lr_called_within_step = True
10 return self
11
12 def __exit__(self, type, value, traceback):
13 self.o._get_lr_called_within_step = False
14
15
16 class DelayerScheduler(_LRScheduler):
17 """Starts with a flat lr schedule until it reaches N epochs then applies
18 the specific scheduler (For example: ReduceLROnPlateau)
19
20 Args:
21 optimizer (:class:`torch.optim.Optimizer`): Wrapped optimizer.
22 delay_epochs (int): Number of epochs to keep the initial lr until starting applying the scheduler.
23 after_scheduler (:class:`torch.optim.lr_scheduler`): After target_epoch, use this scheduler.
24 last_epoch (int, optional): The index of last epoch, defaults to -1. When last_epoch=-1,
25 the schedule is started from the beginning or When last_epoch=-1, sets initial lr as lr.
26 """
27
28 def __init__(self, optimizer, delay_epochs, after_scheduler, last_epoch=-1):
29 if delay_epochs < 0:
30 raise ValueError(f"delay_epochs must >= 0, got {delay_epochs}")
31 self.delay_epochs = delay_epochs
32 self.after_scheduler = after_scheduler
33 self.finished = False
34 super().__init__(optimizer, last_epoch)
35
36 def state_dict(self):
37 state_dict = {key: value for key, value in self.__dict__.items() if key not in "optimizer"}
38 if isinstance(state_dict["after_scheduler"], _LRScheduler):
39 state_dict["after_scheduler_type"] = type(state_dict["after_scheduler"]).__name__
40 state_dict["after_scheduler_dict"] = state_dict["after_scheduler"].state_dict()
41 del state_dict["after_scheduler"]
42 else:
43 raise NotImplementedError()
44 return state_dict
45
46 def get_lr(self):
47 if self.last_epoch >= self.delay_epochs:
48 if not self.finished:
49 self.after_scheduler.base_lrs = self.base_lrs
50 self.finished = True
51 with _enable_get_lr_call(self.after_scheduler):
52 return self.after_scheduler.get_lr()
53
54 return self.base_lrs
55
56 def step(self, epoch=None):
57 if self.finished:
58 if epoch is None:
59 self.after_scheduler.step(None)
60 self._last_lr = self.after_scheduler.get_last_lr()
61 else:
62 self.after_scheduler.step(epoch - self.delay_epochs)
63 self._last_lr = self.after_scheduler.get_last_lr()
64 else:
65 return super(DelayerScheduler, self).step(epoch)
66
67
68 class WarmupScheduler(_LRScheduler):
69 """Starts with a linear warmup lr schedule until it reaches N epochs then applies
70 the specific scheduler (For example: ReduceLROnPlateau).
71
72 Args:
73 optimizer (:class:`torch.optim.Optimizer`): Wrapped optimizer.
74 warmup_epochs (int): Number of epochs to linearly warmup lr until starting applying the scheduler.
75 after_scheduler (:class:`torch.optim.lr_scheduler`): After target_epoch, use this scheduler.
76 last_epoch (int, optional): The index of last epoch, defaults to -1. When last_epoch=-1,
77 the schedule is started from the beginning or When last_epoch=-1, sets initial lr as lr.
78 """
79
80 def __init__(self, optimizer, warmup_epochs, after_scheduler, last_epoch=-1):
81 self.warmup_epochs = int(warmup_epochs)
82 self.after_scheduler = after_scheduler
83 self.finished = False
84 super().__init__(optimizer, last_epoch)
85
86 def state_dict(self):
87 state_dict = {key: value for key, value in self.__dict__.items() if key not in "optimizer"}
88 if isinstance(state_dict["after_scheduler"], _LRScheduler):
89 state_dict["after_scheduler_type"] = type(state_dict["after_scheduler"]).__name__
90 state_dict["after_scheduler_dict"] = state_dict["after_scheduler"].state_dict()
91 del state_dict["after_scheduler"]
92 else:
93 raise NotImplementedError()
94 return state_dict
95
96 def get_lr(self):
97 if self.last_epoch >= self.warmup_epochs:
98 if not self.finished:
99 self.after_scheduler.base_lrs = self.base_lrs
100 self.finished = True
101 return self.after_scheduler.get_lr()
102
103 return [(self.last_epoch + 1) / self.warmup_epochs * lr for lr in self.base_lrs]
104
105 def step(self, epoch=None):
106 if self.finished:
107 if epoch is None:
108 self.after_scheduler.step(None)
109 self._last_lr = self.after_scheduler.get_last_lr()
110 else:
111 self.after_scheduler.step(epoch - self.warmup_epochs)
112 self._last_lr = self.after_scheduler.get_last_lr()
113 else:
114 return super().step(epoch)
115
116
117 class WarmupDelayerScheduler(_LRScheduler):
118 """Starts with a linear warmup lr schedule until it reaches N epochs and a flat lr schedule
119 until it reaches M epochs then applies the specific scheduler (For example: ReduceLROnPlateau).
120
121 Args:
122 optimizer (:class:`torch.optim.Optimizer`): Wrapped optimizer.
123 warmup_epochs (int): Number of epochs to linearly warmup lr until starting applying the scheduler.
124 delay_epochs (int): Number of epochs to keep the initial lr until starting applying the scheduler.
125 after_scheduler (:class:`torch.optim.lr_scheduler`): After target_epoch, use this scheduler.
126 last_epoch (int, optional): The index of last epoch, defaults to -1. When last_epoch=-1,
127 the schedule is started from the beginning or When last_epoch=-1, sets initial lr as lr.
128 """
129
130 def __init__(self, optimizer, warmup_epochs, delay_epochs, after_scheduler, last_epoch=-1):
131 if delay_epochs < 0:
132 raise ValueError(f"delay_epochs must >= 0, got {delay_epochs}")
133 if warmup_epochs < 0:
134 raise ValueError(f"warmup_epochs must >= 0, got {warmup_epochs}")
135 self.warmup_epochs = warmup_epochs
136 self.delay_epochs = delay_epochs
137 self.after_scheduler = after_scheduler
138 self.finished = False
139 super().__init__(optimizer, last_epoch)
140
141 def state_dict(self):
142 state_dict = {key: value for key, value in self.__dict__.items() if key not in "optimizer"}
143 if isinstance(state_dict["after_scheduler"], _LRScheduler):
144 state_dict["after_scheduler_type"] = type(state_dict["after_scheduler"]).__name__
145 state_dict["after_scheduler_dict"] = state_dict["after_scheduler"].state_dict()
146 del state_dict["after_scheduler"]
147 else:
148 raise NotImplementedError()
149 return state_dict
150
151 def get_lr(self):
152 if self.last_epoch >= self.warmup_epochs + self.delay_epochs:
153 if not self.finished:
154 self.after_scheduler.base_lrs = self.base_lrs
155 # reset lr to base_lr
156 for group, base_lr in zip(self.optimizer.param_groups, self.base_lrs):
157 group["lr"] = base_lr
158 self.finished = True
159 with _enable_get_lr_call(self.after_scheduler):
160 return self.after_scheduler.get_lr()
161 elif self.last_epoch >= self.warmup_epochs:
162 return self.base_lrs
163
164 return [(self.last_epoch + 1) / self.warmup_epochs * lr for lr in self.base_lrs]
165
166 def step(self, epoch=None):
167 if self.finished:
168 if epoch is None:
169 self.after_scheduler.step(None)
170 self._last_lr = self.after_scheduler.get_last_lr()
171 else:
172 self.after_scheduler.step(epoch - self.warmup_epochs)
173 self._last_lr = self.after_scheduler.get_last_lr()
174 else:
175 return super().step(epoch)
176
[end of colossalai/nn/lr_scheduler/delayed.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/nn/lr_scheduler/delayed.py b/colossalai/nn/lr_scheduler/delayed.py
--- a/colossalai/nn/lr_scheduler/delayed.py
+++ b/colossalai/nn/lr_scheduler/delayed.py
@@ -1,4 +1,10 @@
-from torch.optim.lr_scheduler import _LRScheduler
+import torch
+from packaging.version import Version
+
+if Version(torch.__version__) >= Version("2.0.0"):
+ from torch.optim.lr_scheduler import LRScheduler as _LRScheduler
+else:
+ from torch.optim.lr_scheduler import _LRScheduler
class _enable_get_lr_call:
| {"golden_diff": "diff --git a/colossalai/nn/lr_scheduler/delayed.py b/colossalai/nn/lr_scheduler/delayed.py\n--- a/colossalai/nn/lr_scheduler/delayed.py\n+++ b/colossalai/nn/lr_scheduler/delayed.py\n@@ -1,4 +1,10 @@\n-from torch.optim.lr_scheduler import _LRScheduler\n+import torch\n+from packaging.version import Version\n+\n+if Version(torch.__version__) >= Version(\"2.0.0\"):\n+ from torch.optim.lr_scheduler import LRScheduler as _LRScheduler\n+else:\n+ from torch.optim.lr_scheduler import _LRScheduler\n \n \n class _enable_get_lr_call:\n", "issue": "[BUG]: colossalai 0.3.3 + torch 2.0.1 + baichuan-2 7b \u8bad\u7ec3\u4fdd\u5b58 lr_scheduler \u65f6\u4f1a\u62a5 NotImplementedError \u9519\n### \ud83d\udc1b Describe the bug\r\n\r\n\u7528 colossalai 0.3.3 + torch 2.0.1 + baichuan-2 7b \u8bad\u7ec3\u4fdd\u5b58 lr_scheduler \u65f6 colossalai/nn/lr_scheduler/delayed.py \u4f1a\u62a5 NotImplementedError \u9519\u3002\r\n\r\nIn [25]: lr_scheduler\r\nOut[25]: <[colossalai.nn.lr](http://colossalai.nn.lr/)_scheduler.cosine.CosineAnnealingWarmupLR at 0x7f01cd616e00>\r\nIn [26]: booster.save_lr_scheduler(lr_scheduler, \"/data/checkpoint/lr_scheduler\")\r\n\r\n```\r\n in <module>:1 \r\n \r\n python3.10/site-packages/colossalai/booster/booster.py:308 in \r\n save_lr_scheduler \r\n \r\n 305 lr_scheduler (LRScheduler): A lr scheduler boosted by Booster. \r\n 306 checkpoint (str): Path to the checkpoint. It must be a local file path. \r\n 307 \"\"\" \r\n \u2771 308 self.checkpoint_io.save_lr_scheduler(lr_scheduler, checkpoint) \r\n 309 \r\n 310 def load_lr_scheduler(self, lr_scheduler: LRScheduler, checkpoint: str) -> None: \r\n 311 \"\"\"Load lr scheduler from checkpoint. \r\n \r\n python3.10/site-packages/colossalai/booster/plugin/gemini_plugin.py:225 \r\n in save_lr_scheduler \r\n \r\n 222 Save model to checkpoint but only on master process. \r\n 223 \"\"\" \r\n 224 if self.coordinator.is_master(): \r\n \u2771 225 super().save_lr_scheduler(lr_scheduler, checkpoint) \r\n 226 \r\n 227 \r\n 228 class GeminiPlugin(DPPluginBase): \r\n \r\n python3.10/site-packages/colossalai/checkpoint_io/checkpoint_io_base.py: \r\n 318 in save_lr_scheduler \r\n \r\n 315 lr_scheduler (LRScheduler): lr scheduler to be saved. \r\n 316 checkpoint: checkpoint path. The checkpoint path can only be a file path. \r\n 317 \"\"\" \r\n \u2771 318 torch.save(lr_scheduler.state_dict(), checkpoint) \r\n 319 \r\n 320 def load_lr_scheduler(self, lr_scheduler: LRScheduler, checkpoint: str): \r\n 321 \"\"\" \r\n \r\n python3.10/site-packages/colossalai/nn/lr_scheduler/delayed.py:93 in \r\n state_dict \r\n \r\n 90 state_dict[\"after_scheduler_dict\"] = state_dict[\"after_scheduler\"].state_dic \r\n 91 del state_dict[\"after_scheduler\"] \r\n 92 else: \r\n \u2771 93 raise NotImplementedError() \r\n 94 return state_dict \r\n 95 \r\n 96 def get_lr(self):\r\n```\r\n\r\n\u8fdb\u4e00\u6b65\u5206\u6790 lr_scheduler \u91cc\u7684\u4fe1\u606f\r\n```\r\nstate_dict = {key: value for key, value in lr_scheduler.__dict__.items() if key not in \"optimizer\"}\r\n\r\n# =>\r\n{\r\n 'warmup_epochs': 2000,\r\n 'after_scheduler': <torch.optim.lr_scheduler.CosineAnnealingLR at 0x7f01cd6173a0>,\r\n 'finished': False,\r\n 'base_lrs': [0.0003],\r\n 'last_epoch': 1,\r\n 'verbose': False,\r\n '_step_count': 2,\r\n '_get_lr_called_within_step': False,\r\n '_last_lr': [3e-07]\r\n}\r\n```\r\n\r\n- \u5176\u4e2d after_scheduler \u662f torch.optim.lr_scheduler.CosineAnnealingLR \u7684\u5b9e\u4f8b\uff0c\u800c torch.optim.lr_scheduler.CosineAnnealingLR \u662f\u7ee7\u627f\u7684 LRScheduler\uff0c\u90a3\u4e48 after_scheduler \u7684\u7236\u7c7b\u662f LRScheduler\r\n\r\n- _LRScheduler \u662f\u7ee7\u627f\u4e86 LRScheduler\r\n\r\n- \u800c\u5728 [save lr scheduler \u65f6\uff08delayed.py) \u4e2d](https://github.com/hpcaitech/ColossalAI/blob/822051d8884a46d4d8626330e21adfd6427c99a0/colossalai/nn/lr_scheduler/delayed.py#L88)\uff0c\u662f `isinstance(state_dict['after_scheduler'], _LRScheduler)`\r\n\r\n```\r\nfrom torch.optim.lr_scheduler import _LRScheduler, LRScheduler\r\n\r\nisinstance(state_dict['after_scheduler'], LRScheduler)\r\n\r\n# => True\r\n\r\nisinstance(state_dict['after_scheduler'], _LRScheduler)\r\n\r\n# => False\r\n\r\n```\r\n\r\n**\u90a3\u8fd9\u6837\uff0c\u662f\u5426\u8bf4\u660e \u5e94\u8be5\u7528 `LRScheduler` \u800c\u4e0d\u662f `_LRScheduler` \u5462\uff1f**\r\n\r\n\r\n\u6ce8\uff1abaichuan-2 \u4f9d\u8d56 torch 2.0+\uff0c\u4e0d\u80fd\u964d\u5230 2.0 \u4ee5\u4e0b\uff08\u7528 1.13 \u4f1a\u62a5 TypeError: sdp_kernel() got an unexpected keyword argument 'enable_mem_efficient'\uff09\r\n\r\n### Environment\r\n\r\n- colossalai 0.3.3\r\n- torch 2.0.1\r\n- baichuan-2 7b \n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from torch.optim.lr_scheduler import _LRScheduler\n\n\nclass _enable_get_lr_call:\n def __init__(self, o):\n self.o = o\n\n def __enter__(self):\n self.o._get_lr_called_within_step = True\n return self\n\n def __exit__(self, type, value, traceback):\n self.o._get_lr_called_within_step = False\n\n\nclass DelayerScheduler(_LRScheduler):\n \"\"\"Starts with a flat lr schedule until it reaches N epochs then applies\n the specific scheduler (For example: ReduceLROnPlateau)\n\n Args:\n optimizer (:class:`torch.optim.Optimizer`): Wrapped optimizer.\n delay_epochs (int): Number of epochs to keep the initial lr until starting applying the scheduler.\n after_scheduler (:class:`torch.optim.lr_scheduler`): After target_epoch, use this scheduler.\n last_epoch (int, optional): The index of last epoch, defaults to -1. When last_epoch=-1,\n the schedule is started from the beginning or When last_epoch=-1, sets initial lr as lr.\n \"\"\"\n\n def __init__(self, optimizer, delay_epochs, after_scheduler, last_epoch=-1):\n if delay_epochs < 0:\n raise ValueError(f\"delay_epochs must >= 0, got {delay_epochs}\")\n self.delay_epochs = delay_epochs\n self.after_scheduler = after_scheduler\n self.finished = False\n super().__init__(optimizer, last_epoch)\n\n def state_dict(self):\n state_dict = {key: value for key, value in self.__dict__.items() if key not in \"optimizer\"}\n if isinstance(state_dict[\"after_scheduler\"], _LRScheduler):\n state_dict[\"after_scheduler_type\"] = type(state_dict[\"after_scheduler\"]).__name__\n state_dict[\"after_scheduler_dict\"] = state_dict[\"after_scheduler\"].state_dict()\n del state_dict[\"after_scheduler\"]\n else:\n raise NotImplementedError()\n return state_dict\n\n def get_lr(self):\n if self.last_epoch >= self.delay_epochs:\n if not self.finished:\n self.after_scheduler.base_lrs = self.base_lrs\n self.finished = True\n with _enable_get_lr_call(self.after_scheduler):\n return self.after_scheduler.get_lr()\n\n return self.base_lrs\n\n def step(self, epoch=None):\n if self.finished:\n if epoch is None:\n self.after_scheduler.step(None)\n self._last_lr = self.after_scheduler.get_last_lr()\n else:\n self.after_scheduler.step(epoch - self.delay_epochs)\n self._last_lr = self.after_scheduler.get_last_lr()\n else:\n return super(DelayerScheduler, self).step(epoch)\n\n\nclass WarmupScheduler(_LRScheduler):\n \"\"\"Starts with a linear warmup lr schedule until it reaches N epochs then applies\n the specific scheduler (For example: ReduceLROnPlateau).\n\n Args:\n optimizer (:class:`torch.optim.Optimizer`): Wrapped optimizer.\n warmup_epochs (int): Number of epochs to linearly warmup lr until starting applying the scheduler.\n after_scheduler (:class:`torch.optim.lr_scheduler`): After target_epoch, use this scheduler.\n last_epoch (int, optional): The index of last epoch, defaults to -1. When last_epoch=-1,\n the schedule is started from the beginning or When last_epoch=-1, sets initial lr as lr.\n \"\"\"\n\n def __init__(self, optimizer, warmup_epochs, after_scheduler, last_epoch=-1):\n self.warmup_epochs = int(warmup_epochs)\n self.after_scheduler = after_scheduler\n self.finished = False\n super().__init__(optimizer, last_epoch)\n\n def state_dict(self):\n state_dict = {key: value for key, value in self.__dict__.items() if key not in \"optimizer\"}\n if isinstance(state_dict[\"after_scheduler\"], _LRScheduler):\n state_dict[\"after_scheduler_type\"] = type(state_dict[\"after_scheduler\"]).__name__\n state_dict[\"after_scheduler_dict\"] = state_dict[\"after_scheduler\"].state_dict()\n del state_dict[\"after_scheduler\"]\n else:\n raise NotImplementedError()\n return state_dict\n\n def get_lr(self):\n if self.last_epoch >= self.warmup_epochs:\n if not self.finished:\n self.after_scheduler.base_lrs = self.base_lrs\n self.finished = True\n return self.after_scheduler.get_lr()\n\n return [(self.last_epoch + 1) / self.warmup_epochs * lr for lr in self.base_lrs]\n\n def step(self, epoch=None):\n if self.finished:\n if epoch is None:\n self.after_scheduler.step(None)\n self._last_lr = self.after_scheduler.get_last_lr()\n else:\n self.after_scheduler.step(epoch - self.warmup_epochs)\n self._last_lr = self.after_scheduler.get_last_lr()\n else:\n return super().step(epoch)\n\n\nclass WarmupDelayerScheduler(_LRScheduler):\n \"\"\"Starts with a linear warmup lr schedule until it reaches N epochs and a flat lr schedule\n until it reaches M epochs then applies the specific scheduler (For example: ReduceLROnPlateau).\n\n Args:\n optimizer (:class:`torch.optim.Optimizer`): Wrapped optimizer.\n warmup_epochs (int): Number of epochs to linearly warmup lr until starting applying the scheduler.\n delay_epochs (int): Number of epochs to keep the initial lr until starting applying the scheduler.\n after_scheduler (:class:`torch.optim.lr_scheduler`): After target_epoch, use this scheduler.\n last_epoch (int, optional): The index of last epoch, defaults to -1. When last_epoch=-1,\n the schedule is started from the beginning or When last_epoch=-1, sets initial lr as lr.\n \"\"\"\n\n def __init__(self, optimizer, warmup_epochs, delay_epochs, after_scheduler, last_epoch=-1):\n if delay_epochs < 0:\n raise ValueError(f\"delay_epochs must >= 0, got {delay_epochs}\")\n if warmup_epochs < 0:\n raise ValueError(f\"warmup_epochs must >= 0, got {warmup_epochs}\")\n self.warmup_epochs = warmup_epochs\n self.delay_epochs = delay_epochs\n self.after_scheduler = after_scheduler\n self.finished = False\n super().__init__(optimizer, last_epoch)\n\n def state_dict(self):\n state_dict = {key: value for key, value in self.__dict__.items() if key not in \"optimizer\"}\n if isinstance(state_dict[\"after_scheduler\"], _LRScheduler):\n state_dict[\"after_scheduler_type\"] = type(state_dict[\"after_scheduler\"]).__name__\n state_dict[\"after_scheduler_dict\"] = state_dict[\"after_scheduler\"].state_dict()\n del state_dict[\"after_scheduler\"]\n else:\n raise NotImplementedError()\n return state_dict\n\n def get_lr(self):\n if self.last_epoch >= self.warmup_epochs + self.delay_epochs:\n if not self.finished:\n self.after_scheduler.base_lrs = self.base_lrs\n # reset lr to base_lr\n for group, base_lr in zip(self.optimizer.param_groups, self.base_lrs):\n group[\"lr\"] = base_lr\n self.finished = True\n with _enable_get_lr_call(self.after_scheduler):\n return self.after_scheduler.get_lr()\n elif self.last_epoch >= self.warmup_epochs:\n return self.base_lrs\n\n return [(self.last_epoch + 1) / self.warmup_epochs * lr for lr in self.base_lrs]\n\n def step(self, epoch=None):\n if self.finished:\n if epoch is None:\n self.after_scheduler.step(None)\n self._last_lr = self.after_scheduler.get_last_lr()\n else:\n self.after_scheduler.step(epoch - self.warmup_epochs)\n self._last_lr = self.after_scheduler.get_last_lr()\n else:\n return super().step(epoch)\n", "path": "colossalai/nn/lr_scheduler/delayed.py"}]} | 3,887 | 155 |
gh_patches_debug_35227 | rasdani/github-patches | git_diff | openshift__openshift-ansible-3055 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
openshift_image_tag=latest broken again
https://github.com/openshift/openshift-ansible/pull/2882 allowed for `openshift_image_tag=latest`.
I think is was broken shortly thereafter with https://github.com/openshift/openshift-ansible/pull/2855 afaict
```
deployment_type=origin
openshift_image_tag=latest
```
```
TASK [openshift_master_facts : set_fact] ***************************************
fatal: [origin-master.local.variantweb.net]: FAILED! => {"failed": true, "msg": "Unknown short_version atest"}
```
Looks like a code path is assuming the first character of the image tag is 'v' and removing it.
</issue>
<code>
[start of roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py]
1 # pylint: disable=missing-docstring
2
3 import re
4 from ansible.errors import AnsibleError
5 from ansible.plugins.lookup import LookupBase
6
7
8 class LookupModule(LookupBase):
9 # pylint: disable=too-many-branches,too-many-statements,too-many-arguments
10
11 def run(self, terms, variables=None, zones_enabled=True, short_version=None,
12 deployment_type=None, **kwargs):
13
14 priorities = [
15 {'name': 'LeastRequestedPriority', 'weight': 1},
16 {'name': 'BalancedResourceAllocation', 'weight': 1},
17 {'name': 'SelectorSpreadPriority', 'weight': 1}
18 ]
19
20 if short_version is None or deployment_type is None:
21 if 'openshift' not in variables:
22 raise AnsibleError("This lookup module requires openshift_facts to be run prior to use")
23
24 if deployment_type is None:
25 if 'common' not in variables['openshift'] or 'deployment_type' not in variables['openshift']['common']:
26 raise AnsibleError("This lookup module requires that the deployment_type be set")
27
28 deployment_type = variables['openshift']['common']['deployment_type']
29
30 if short_version is None:
31 if 'short_version' in variables['openshift']['common']:
32 short_version = variables['openshift']['common']['short_version']
33 elif 'openshift_release' in variables:
34 release = variables['openshift_release']
35 if release.startswith('v'):
36 short_version = release[1:]
37 else:
38 short_version = release
39 short_version = '.'.join(short_version.split('.')[0:2])
40 elif 'openshift_version' in variables:
41 version = variables['openshift_version']
42 short_version = '.'.join(version.split('.')[0:2])
43 else:
44 # pylint: disable=line-too-long
45 raise AnsibleError("Either OpenShift needs to be installed or openshift_release needs to be specified")
46
47 if deployment_type == 'origin':
48 if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6']:
49 raise AnsibleError("Unknown short_version %s" % short_version)
50 elif deployment_type == 'openshift-enterprise':
51 if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6']:
52 raise AnsibleError("Unknown short_version %s" % short_version)
53 else:
54 raise AnsibleError("Unknown deployment_type %s" % deployment_type)
55
56 if deployment_type == 'openshift-enterprise':
57 # convert short_version to origin short_version
58 short_version = re.sub('^3.', '1.', short_version)
59
60 if short_version == '1.4':
61 priorities.append({'name': 'NodePreferAvoidPodsPriority', 'weight': 10000})
62
63 # only 1.1 didn't include NodeAffinityPriority
64 if short_version != '1.1':
65 priorities.append({'name': 'NodeAffinityPriority', 'weight': 1})
66
67 if short_version not in ['1.1', '1.2']:
68 priorities.append({'name': 'TaintTolerationPriority', 'weight': 1})
69
70 if short_version not in ['1.1', '1.2', '1.3']:
71 priorities.append({'name': 'InterPodAffinityPriority', 'weight': 1})
72
73 if zones_enabled:
74 zone_priority = {
75 'name': 'Zone',
76 'argument': {
77 'serviceAntiAffinity': {
78 'label': 'zone'
79 }
80 },
81 'weight': 2
82 }
83 priorities.append(zone_priority)
84
85 return priorities
86
[end of roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py]
[start of roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py]
1 # pylint: disable=missing-docstring
2
3 import re
4 from ansible.errors import AnsibleError
5 from ansible.plugins.lookup import LookupBase
6
7
8 class LookupModule(LookupBase):
9 # pylint: disable=too-many-branches,too-many-statements,too-many-arguments
10
11 def run(self, terms, variables=None, regions_enabled=True, short_version=None,
12 deployment_type=None, **kwargs):
13
14 predicates = []
15
16 if short_version is None or deployment_type is None:
17 if 'openshift' not in variables:
18 raise AnsibleError("This lookup module requires openshift_facts to be run prior to use")
19
20 if deployment_type is None:
21 if 'common' not in variables['openshift'] or 'deployment_type' not in variables['openshift']['common']:
22 raise AnsibleError("This lookup module requires that the deployment_type be set")
23
24 deployment_type = variables['openshift']['common']['deployment_type']
25
26 if short_version is None:
27 if 'short_version' in variables['openshift']['common']:
28 short_version = variables['openshift']['common']['short_version']
29 elif 'openshift_release' in variables:
30 release = variables['openshift_release']
31 if release.startswith('v'):
32 short_version = release[1:]
33 else:
34 short_version = release
35 short_version = '.'.join(short_version.split('.')[0:2])
36 elif 'openshift_version' in variables:
37 version = variables['openshift_version']
38 short_version = '.'.join(version.split('.')[0:2])
39 else:
40 # pylint: disable=line-too-long
41 raise AnsibleError("Either OpenShift needs to be installed or openshift_release needs to be specified")
42 if deployment_type == 'origin':
43 if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6']:
44 raise AnsibleError("Unknown short_version %s" % short_version)
45 elif deployment_type == 'openshift-enterprise':
46 if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6']:
47 raise AnsibleError("Unknown short_version %s" % short_version)
48 else:
49 raise AnsibleError("Unknown deployment_type %s" % deployment_type)
50
51 if deployment_type == 'openshift-enterprise':
52 # convert short_version to origin short_version
53 short_version = re.sub('^3.', '1.', short_version)
54
55 if short_version in ['1.1', '1.2']:
56 predicates.append({'name': 'PodFitsHostPorts'})
57 predicates.append({'name': 'PodFitsResources'})
58
59 # applies to all known versions
60 predicates.append({'name': 'NoDiskConflict'})
61
62 # only 1.1 didn't include NoVolumeZoneConflict
63 if short_version != '1.1':
64 predicates.append({'name': 'NoVolumeZoneConflict'})
65
66 if short_version in ['1.1', '1.2']:
67 predicates.append({'name': 'MatchNodeSelector'})
68
69 if short_version != '1.1':
70 predicates.append({'name': 'MaxEBSVolumeCount'})
71 predicates.append({'name': 'MaxGCEPDVolumeCount'})
72
73 if short_version not in ['1.1', '1.2']:
74 predicates.append({'name': 'GeneralPredicates'})
75 predicates.append({'name': 'PodToleratesNodeTaints'})
76 predicates.append({'name': 'CheckNodeMemoryPressure'})
77
78 if short_version not in ['1.1', '1.2', '1.3']:
79 predicates.append({'name': 'CheckNodeDiskPressure'})
80 predicates.append({'name': 'MatchInterPodAffinity'})
81
82 if regions_enabled:
83 region_predicate = {
84 'name': 'Region',
85 'argument': {
86 'serviceAffinity': {
87 'labels': ['region']
88 }
89 }
90 }
91 predicates.append(region_predicate)
92
93 return predicates
94
[end of roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py b/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py
--- a/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py
+++ b/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py
@@ -40,10 +40,10 @@
# pylint: disable=line-too-long
raise AnsibleError("Either OpenShift needs to be installed or openshift_release needs to be specified")
if deployment_type == 'origin':
- if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6']:
+ if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6', 'latest']:
raise AnsibleError("Unknown short_version %s" % short_version)
elif deployment_type == 'openshift-enterprise':
- if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6']:
+ if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6', 'latest']:
raise AnsibleError("Unknown short_version %s" % short_version)
else:
raise AnsibleError("Unknown deployment_type %s" % deployment_type)
diff --git a/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py b/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py
--- a/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py
+++ b/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py
@@ -45,10 +45,10 @@
raise AnsibleError("Either OpenShift needs to be installed or openshift_release needs to be specified")
if deployment_type == 'origin':
- if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6']:
+ if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6', 'latest']:
raise AnsibleError("Unknown short_version %s" % short_version)
elif deployment_type == 'openshift-enterprise':
- if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6']:
+ if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6', 'latest']:
raise AnsibleError("Unknown short_version %s" % short_version)
else:
raise AnsibleError("Unknown deployment_type %s" % deployment_type)
| {"golden_diff": "diff --git a/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py b/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py\n--- a/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py\n+++ b/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py\n@@ -40,10 +40,10 @@\n # pylint: disable=line-too-long\n raise AnsibleError(\"Either OpenShift needs to be installed or openshift_release needs to be specified\")\n if deployment_type == 'origin':\n- if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6']:\n+ if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6', 'latest']:\n raise AnsibleError(\"Unknown short_version %s\" % short_version)\n elif deployment_type == 'openshift-enterprise':\n- if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6']:\n+ if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6', 'latest']:\n raise AnsibleError(\"Unknown short_version %s\" % short_version)\n else:\n raise AnsibleError(\"Unknown deployment_type %s\" % deployment_type)\ndiff --git a/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py b/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py\n--- a/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py\n+++ b/roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py\n@@ -45,10 +45,10 @@\n raise AnsibleError(\"Either OpenShift needs to be installed or openshift_release needs to be specified\")\n \n if deployment_type == 'origin':\n- if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6']:\n+ if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6', 'latest']:\n raise AnsibleError(\"Unknown short_version %s\" % short_version)\n elif deployment_type == 'openshift-enterprise':\n- if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6']:\n+ if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6', 'latest']:\n raise AnsibleError(\"Unknown short_version %s\" % short_version)\n else:\n raise AnsibleError(\"Unknown deployment_type %s\" % deployment_type)\n", "issue": "openshift_image_tag=latest broken again\nhttps://github.com/openshift/openshift-ansible/pull/2882 allowed for `openshift_image_tag=latest`.\r\nI think is was broken shortly thereafter with https://github.com/openshift/openshift-ansible/pull/2855 afaict\r\n\r\n```\r\ndeployment_type=origin\r\nopenshift_image_tag=latest\r\n```\r\n\r\n```\r\nTASK [openshift_master_facts : set_fact] ***************************************\r\nfatal: [origin-master.local.variantweb.net]: FAILED! => {\"failed\": true, \"msg\": \"Unknown short_version atest\"}\r\n```\r\n\r\nLooks like a code path is assuming the first character of the image tag is 'v' and removing it.\n", "before_files": [{"content": "# pylint: disable=missing-docstring\n\nimport re\nfrom ansible.errors import AnsibleError\nfrom ansible.plugins.lookup import LookupBase\n\n\nclass LookupModule(LookupBase):\n # pylint: disable=too-many-branches,too-many-statements,too-many-arguments\n\n def run(self, terms, variables=None, zones_enabled=True, short_version=None,\n deployment_type=None, **kwargs):\n\n priorities = [\n {'name': 'LeastRequestedPriority', 'weight': 1},\n {'name': 'BalancedResourceAllocation', 'weight': 1},\n {'name': 'SelectorSpreadPriority', 'weight': 1}\n ]\n\n if short_version is None or deployment_type is None:\n if 'openshift' not in variables:\n raise AnsibleError(\"This lookup module requires openshift_facts to be run prior to use\")\n\n if deployment_type is None:\n if 'common' not in variables['openshift'] or 'deployment_type' not in variables['openshift']['common']:\n raise AnsibleError(\"This lookup module requires that the deployment_type be set\")\n\n deployment_type = variables['openshift']['common']['deployment_type']\n\n if short_version is None:\n if 'short_version' in variables['openshift']['common']:\n short_version = variables['openshift']['common']['short_version']\n elif 'openshift_release' in variables:\n release = variables['openshift_release']\n if release.startswith('v'):\n short_version = release[1:]\n else:\n short_version = release\n short_version = '.'.join(short_version.split('.')[0:2])\n elif 'openshift_version' in variables:\n version = variables['openshift_version']\n short_version = '.'.join(version.split('.')[0:2])\n else:\n # pylint: disable=line-too-long\n raise AnsibleError(\"Either OpenShift needs to be installed or openshift_release needs to be specified\")\n\n if deployment_type == 'origin':\n if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6']:\n raise AnsibleError(\"Unknown short_version %s\" % short_version)\n elif deployment_type == 'openshift-enterprise':\n if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6']:\n raise AnsibleError(\"Unknown short_version %s\" % short_version)\n else:\n raise AnsibleError(\"Unknown deployment_type %s\" % deployment_type)\n\n if deployment_type == 'openshift-enterprise':\n # convert short_version to origin short_version\n short_version = re.sub('^3.', '1.', short_version)\n\n if short_version == '1.4':\n priorities.append({'name': 'NodePreferAvoidPodsPriority', 'weight': 10000})\n\n # only 1.1 didn't include NodeAffinityPriority\n if short_version != '1.1':\n priorities.append({'name': 'NodeAffinityPriority', 'weight': 1})\n\n if short_version not in ['1.1', '1.2']:\n priorities.append({'name': 'TaintTolerationPriority', 'weight': 1})\n\n if short_version not in ['1.1', '1.2', '1.3']:\n priorities.append({'name': 'InterPodAffinityPriority', 'weight': 1})\n\n if zones_enabled:\n zone_priority = {\n 'name': 'Zone',\n 'argument': {\n 'serviceAntiAffinity': {\n 'label': 'zone'\n }\n },\n 'weight': 2\n }\n priorities.append(zone_priority)\n\n return priorities\n", "path": "roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_priorities.py"}, {"content": "# pylint: disable=missing-docstring\n\nimport re\nfrom ansible.errors import AnsibleError\nfrom ansible.plugins.lookup import LookupBase\n\n\nclass LookupModule(LookupBase):\n # pylint: disable=too-many-branches,too-many-statements,too-many-arguments\n\n def run(self, terms, variables=None, regions_enabled=True, short_version=None,\n deployment_type=None, **kwargs):\n\n predicates = []\n\n if short_version is None or deployment_type is None:\n if 'openshift' not in variables:\n raise AnsibleError(\"This lookup module requires openshift_facts to be run prior to use\")\n\n if deployment_type is None:\n if 'common' not in variables['openshift'] or 'deployment_type' not in variables['openshift']['common']:\n raise AnsibleError(\"This lookup module requires that the deployment_type be set\")\n\n deployment_type = variables['openshift']['common']['deployment_type']\n\n if short_version is None:\n if 'short_version' in variables['openshift']['common']:\n short_version = variables['openshift']['common']['short_version']\n elif 'openshift_release' in variables:\n release = variables['openshift_release']\n if release.startswith('v'):\n short_version = release[1:]\n else:\n short_version = release\n short_version = '.'.join(short_version.split('.')[0:2])\n elif 'openshift_version' in variables:\n version = variables['openshift_version']\n short_version = '.'.join(version.split('.')[0:2])\n else:\n # pylint: disable=line-too-long\n raise AnsibleError(\"Either OpenShift needs to be installed or openshift_release needs to be specified\")\n if deployment_type == 'origin':\n if short_version not in ['1.1', '1.2', '1.3', '1.4', '1.5', '1.6']:\n raise AnsibleError(\"Unknown short_version %s\" % short_version)\n elif deployment_type == 'openshift-enterprise':\n if short_version not in ['3.1', '3.2', '3.3', '3.4', '3.5', '3.6']:\n raise AnsibleError(\"Unknown short_version %s\" % short_version)\n else:\n raise AnsibleError(\"Unknown deployment_type %s\" % deployment_type)\n\n if deployment_type == 'openshift-enterprise':\n # convert short_version to origin short_version\n short_version = re.sub('^3.', '1.', short_version)\n\n if short_version in ['1.1', '1.2']:\n predicates.append({'name': 'PodFitsHostPorts'})\n predicates.append({'name': 'PodFitsResources'})\n\n # applies to all known versions\n predicates.append({'name': 'NoDiskConflict'})\n\n # only 1.1 didn't include NoVolumeZoneConflict\n if short_version != '1.1':\n predicates.append({'name': 'NoVolumeZoneConflict'})\n\n if short_version in ['1.1', '1.2']:\n predicates.append({'name': 'MatchNodeSelector'})\n\n if short_version != '1.1':\n predicates.append({'name': 'MaxEBSVolumeCount'})\n predicates.append({'name': 'MaxGCEPDVolumeCount'})\n\n if short_version not in ['1.1', '1.2']:\n predicates.append({'name': 'GeneralPredicates'})\n predicates.append({'name': 'PodToleratesNodeTaints'})\n predicates.append({'name': 'CheckNodeMemoryPressure'})\n\n if short_version not in ['1.1', '1.2', '1.3']:\n predicates.append({'name': 'CheckNodeDiskPressure'})\n predicates.append({'name': 'MatchInterPodAffinity'})\n\n if regions_enabled:\n region_predicate = {\n 'name': 'Region',\n 'argument': {\n 'serviceAffinity': {\n 'labels': ['region']\n }\n }\n }\n predicates.append(region_predicate)\n\n return predicates\n", "path": "roles/openshift_master_facts/lookup_plugins/openshift_master_facts_default_predicates.py"}]} | 2,782 | 722 |
gh_patches_debug_13132 | rasdani/github-patches | git_diff | conan-io__conan-14185 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] Can't call `conan upload --recipe-only` twice with backup sources enabled
### Steps to reproduce
1. Enable backup sources
2. Export a recipe that downloads file
3. Call conan upload only recipe for ref
4. Do it again, it fails due to KeyError
Found while prepping for https://github.com/conan-io/conan-center-index/pull/18082
</issue>
<code>
[start of conans/client/downloaders/download_cache.py]
1 import json
2 import os
3 from contextlib import contextmanager
4 from threading import Lock
5
6 from conans.util.dates import timestamp_now
7 from conans.util.files import load, save
8 from conans.util.locks import SimpleLock
9 from conans.util.sha import sha256 as compute_sha256
10
11
12 class DownloadCache:
13 """ The download cache has 3 folders
14 - "s": SOURCE_BACKUP for the files.download(internet_url) backup sources feature
15 - "c": CONAN_CACHE: for caching Conan packages artifacts
16 - "locks": The LOCKS folder containing the file locks for concurrent access to the cache
17 """
18 _LOCKS = "locks"
19 _SOURCE_BACKUP = "s"
20 _CONAN_CACHE = "c"
21
22 def __init__(self, path: str):
23 self._path: str = path
24
25 def source_path(self, sha256):
26 return os.path.join(self._path, self._SOURCE_BACKUP, sha256)
27
28 def cached_path(self, url):
29 h = compute_sha256(url.encode())
30 return os.path.join(self._path, self._CONAN_CACHE, h), h
31
32 _thread_locks = {} # Needs to be shared among all instances
33
34 @contextmanager
35 def lock(self, lock_id):
36 lock = os.path.join(self._path, self._LOCKS, lock_id)
37 with SimpleLock(lock):
38 # Once the process has access, make sure multithread is locked too
39 # as SimpleLock doesn't work multithread
40 thread_lock = self._thread_locks.setdefault(lock, Lock())
41 thread_lock.acquire()
42 try:
43 yield
44 finally:
45 thread_lock.release()
46
47 def get_backup_sources_files_to_upload(self, package_list, excluded_urls):
48 """ from a package_list of packages to upload, collect from the backup-sources cache
49 the matching references to upload those backups too
50 """
51 def should_upload_sources(package):
52 return any(prev["upload"] for prev in package["revisions"].values())
53
54 files_to_upload = []
55 path_backups = os.path.join(self._path, self._SOURCE_BACKUP)
56
57 if not os.path.exists(path_backups):
58 return []
59
60 if excluded_urls is None:
61 excluded_urls = []
62
63 all_refs = {str(k) for k, ref in package_list.refs()
64 if ref.get("upload") or any(should_upload_sources(p)
65 for p in ref["packages"].values())}
66 for f in os.listdir(path_backups):
67 if f.endswith(".json"):
68 f = os.path.join(path_backups, f)
69 content = json.loads(load(f))
70 refs = content["references"]
71 # unknown entries are not uploaded at this moment, the flow is not expected.
72 for ref, urls in refs.items():
73 is_excluded = all(any(url.startswith(excluded_url)
74 for excluded_url in excluded_urls)
75 for url in urls)
76 if not is_excluded and ref in all_refs:
77 files_to_upload.append(f)
78 files_to_upload.append(f[:-5])
79 break
80 return files_to_upload
81
82 @staticmethod
83 def update_backup_sources_json(cached_path, conanfile, urls):
84 """ create or update the sha256.json file with the references and new urls used
85 """
86 summary_path = cached_path + ".json"
87 if os.path.exists(summary_path):
88 summary = json.loads(load(summary_path))
89 else:
90 summary = {"references": {}, "timestamp": timestamp_now()}
91
92 try:
93 summary_key = str(conanfile.ref)
94 except AttributeError:
95 # The recipe path would be different between machines
96 # So best we can do is to set this as unknown
97 summary_key = "unknown"
98
99 if not isinstance(urls, (list, tuple)):
100 urls = [urls]
101 existing_urls = summary["references"].setdefault(summary_key, [])
102 existing_urls.extend(url for url in urls if url not in existing_urls)
103 conanfile.output.verbose(f"Updating ${summary_path} summary file")
104 summary_dump = json.dumps(summary)
105 conanfile.output.debug(f"New summary: ${summary_dump}")
106 save(summary_path, json.dumps(summary))
107
[end of conans/client/downloaders/download_cache.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/client/downloaders/download_cache.py b/conans/client/downloaders/download_cache.py
--- a/conans/client/downloaders/download_cache.py
+++ b/conans/client/downloaders/download_cache.py
@@ -60,9 +60,12 @@
if excluded_urls is None:
excluded_urls = []
- all_refs = {str(k) for k, ref in package_list.refs()
- if ref.get("upload") or any(should_upload_sources(p)
- for p in ref["packages"].values())}
+ all_refs = set()
+ for k, ref in package_list.refs():
+ packages = ref.get("packages", {}).values()
+ if ref.get("upload") or any(should_upload_sources(p) for p in packages):
+ all_refs.add(str(k))
+
for f in os.listdir(path_backups):
if f.endswith(".json"):
f = os.path.join(path_backups, f)
| {"golden_diff": "diff --git a/conans/client/downloaders/download_cache.py b/conans/client/downloaders/download_cache.py\n--- a/conans/client/downloaders/download_cache.py\n+++ b/conans/client/downloaders/download_cache.py\n@@ -60,9 +60,12 @@\n if excluded_urls is None:\n excluded_urls = []\n \n- all_refs = {str(k) for k, ref in package_list.refs()\n- if ref.get(\"upload\") or any(should_upload_sources(p)\n- for p in ref[\"packages\"].values())}\n+ all_refs = set()\n+ for k, ref in package_list.refs():\n+ packages = ref.get(\"packages\", {}).values()\n+ if ref.get(\"upload\") or any(should_upload_sources(p) for p in packages):\n+ all_refs.add(str(k))\n+\n for f in os.listdir(path_backups):\n if f.endswith(\".json\"):\n f = os.path.join(path_backups, f)\n", "issue": "[bug] Can't call `conan upload --recipe-only` twice with backup sources enabled\n### Steps to reproduce\r\n\r\n1. Enable backup sources\r\n2. Export a recipe that downloads file\r\n3. Call conan upload only recipe for ref\r\n4. Do it again, it fails due to KeyError\r\n\r\n\r\nFound while prepping for https://github.com/conan-io/conan-center-index/pull/18082\n", "before_files": [{"content": "import json\nimport os\nfrom contextlib import contextmanager\nfrom threading import Lock\n\nfrom conans.util.dates import timestamp_now\nfrom conans.util.files import load, save\nfrom conans.util.locks import SimpleLock\nfrom conans.util.sha import sha256 as compute_sha256\n\n\nclass DownloadCache:\n \"\"\" The download cache has 3 folders\n - \"s\": SOURCE_BACKUP for the files.download(internet_url) backup sources feature\n - \"c\": CONAN_CACHE: for caching Conan packages artifacts\n - \"locks\": The LOCKS folder containing the file locks for concurrent access to the cache\n \"\"\"\n _LOCKS = \"locks\"\n _SOURCE_BACKUP = \"s\"\n _CONAN_CACHE = \"c\"\n\n def __init__(self, path: str):\n self._path: str = path\n\n def source_path(self, sha256):\n return os.path.join(self._path, self._SOURCE_BACKUP, sha256)\n\n def cached_path(self, url):\n h = compute_sha256(url.encode())\n return os.path.join(self._path, self._CONAN_CACHE, h), h\n\n _thread_locks = {} # Needs to be shared among all instances\n\n @contextmanager\n def lock(self, lock_id):\n lock = os.path.join(self._path, self._LOCKS, lock_id)\n with SimpleLock(lock):\n # Once the process has access, make sure multithread is locked too\n # as SimpleLock doesn't work multithread\n thread_lock = self._thread_locks.setdefault(lock, Lock())\n thread_lock.acquire()\n try:\n yield\n finally:\n thread_lock.release()\n\n def get_backup_sources_files_to_upload(self, package_list, excluded_urls):\n \"\"\" from a package_list of packages to upload, collect from the backup-sources cache\n the matching references to upload those backups too\n \"\"\"\n def should_upload_sources(package):\n return any(prev[\"upload\"] for prev in package[\"revisions\"].values())\n\n files_to_upload = []\n path_backups = os.path.join(self._path, self._SOURCE_BACKUP)\n\n if not os.path.exists(path_backups):\n return []\n\n if excluded_urls is None:\n excluded_urls = []\n\n all_refs = {str(k) for k, ref in package_list.refs()\n if ref.get(\"upload\") or any(should_upload_sources(p)\n for p in ref[\"packages\"].values())}\n for f in os.listdir(path_backups):\n if f.endswith(\".json\"):\n f = os.path.join(path_backups, f)\n content = json.loads(load(f))\n refs = content[\"references\"]\n # unknown entries are not uploaded at this moment, the flow is not expected.\n for ref, urls in refs.items():\n is_excluded = all(any(url.startswith(excluded_url)\n for excluded_url in excluded_urls)\n for url in urls)\n if not is_excluded and ref in all_refs:\n files_to_upload.append(f)\n files_to_upload.append(f[:-5])\n break\n return files_to_upload\n\n @staticmethod\n def update_backup_sources_json(cached_path, conanfile, urls):\n \"\"\" create or update the sha256.json file with the references and new urls used\n \"\"\"\n summary_path = cached_path + \".json\"\n if os.path.exists(summary_path):\n summary = json.loads(load(summary_path))\n else:\n summary = {\"references\": {}, \"timestamp\": timestamp_now()}\n\n try:\n summary_key = str(conanfile.ref)\n except AttributeError:\n # The recipe path would be different between machines\n # So best we can do is to set this as unknown\n summary_key = \"unknown\"\n\n if not isinstance(urls, (list, tuple)):\n urls = [urls]\n existing_urls = summary[\"references\"].setdefault(summary_key, [])\n existing_urls.extend(url for url in urls if url not in existing_urls)\n conanfile.output.verbose(f\"Updating ${summary_path} summary file\")\n summary_dump = json.dumps(summary)\n conanfile.output.debug(f\"New summary: ${summary_dump}\")\n save(summary_path, json.dumps(summary))\n", "path": "conans/client/downloaders/download_cache.py"}]} | 1,749 | 205 |
gh_patches_debug_8952 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-1413 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
utcnow is deprecated in python 3.12
#### Environment details
- OS: Linux
- Python version: 3.12.0
- pip version: 23.2.1
- `google-auth` version: 2.9.1
#### Issue
Here is the related code
https://github.com/googleapis/google-auth-library-python/blob/d2ab3afdb567850121fec7de1d86fb5fb0fa80ed/google/auth/_helpers.py#L89-L95
</issue>
<code>
[start of google/auth/_helpers.py]
1 # Copyright 2015 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Helper functions for commonly used utilities."""
16
17 import base64
18 import calendar
19 import datetime
20 from email.message import Message
21 import sys
22 import urllib
23
24 from google.auth import exceptions
25
26 # Token server doesn't provide a new a token when doing refresh unless the
27 # token is expiring within 30 seconds, so refresh threshold should not be
28 # more than 30 seconds. Otherwise auth lib will send tons of refresh requests
29 # until 30 seconds before the expiration, and cause a spike of CPU usage.
30 REFRESH_THRESHOLD = datetime.timedelta(seconds=20)
31
32
33 def copy_docstring(source_class):
34 """Decorator that copies a method's docstring from another class.
35
36 Args:
37 source_class (type): The class that has the documented method.
38
39 Returns:
40 Callable: A decorator that will copy the docstring of the same
41 named method in the source class to the decorated method.
42 """
43
44 def decorator(method):
45 """Decorator implementation.
46
47 Args:
48 method (Callable): The method to copy the docstring to.
49
50 Returns:
51 Callable: the same method passed in with an updated docstring.
52
53 Raises:
54 google.auth.exceptions.InvalidOperation: if the method already has a docstring.
55 """
56 if method.__doc__:
57 raise exceptions.InvalidOperation("Method already has a docstring.")
58
59 source_method = getattr(source_class, method.__name__)
60 method.__doc__ = source_method.__doc__
61
62 return method
63
64 return decorator
65
66
67 def parse_content_type(header_value):
68 """Parse a 'content-type' header value to get just the plain media-type (without parameters).
69
70 This is done using the class Message from email.message as suggested in PEP 594
71 (because the cgi is now deprecated and will be removed in python 3.13,
72 see https://peps.python.org/pep-0594/#cgi).
73
74 Args:
75 header_value (str): The value of a 'content-type' header as a string.
76
77 Returns:
78 str: A string with just the lowercase media-type from the parsed 'content-type' header.
79 If the provided content-type is not parsable, returns 'text/plain',
80 the default value for textual files.
81 """
82 m = Message()
83 m["content-type"] = header_value
84 return (
85 m.get_content_type()
86 ) # Despite the name, actually returns just the media-type
87
88
89 def utcnow():
90 """Returns the current UTC datetime.
91
92 Returns:
93 datetime: The current time in UTC.
94 """
95 return datetime.datetime.utcnow()
96
97
98 def datetime_to_secs(value):
99 """Convert a datetime object to the number of seconds since the UNIX epoch.
100
101 Args:
102 value (datetime): The datetime to convert.
103
104 Returns:
105 int: The number of seconds since the UNIX epoch.
106 """
107 return calendar.timegm(value.utctimetuple())
108
109
110 def to_bytes(value, encoding="utf-8"):
111 """Converts a string value to bytes, if necessary.
112
113 Args:
114 value (Union[str, bytes]): The value to be converted.
115 encoding (str): The encoding to use to convert unicode to bytes.
116 Defaults to "utf-8".
117
118 Returns:
119 bytes: The original value converted to bytes (if unicode) or as
120 passed in if it started out as bytes.
121
122 Raises:
123 google.auth.exceptions.InvalidValue: If the value could not be converted to bytes.
124 """
125 result = value.encode(encoding) if isinstance(value, str) else value
126 if isinstance(result, bytes):
127 return result
128 else:
129 raise exceptions.InvalidValue(
130 "{0!r} could not be converted to bytes".format(value)
131 )
132
133
134 def from_bytes(value):
135 """Converts bytes to a string value, if necessary.
136
137 Args:
138 value (Union[str, bytes]): The value to be converted.
139
140 Returns:
141 str: The original value converted to unicode (if bytes) or as passed in
142 if it started out as unicode.
143
144 Raises:
145 google.auth.exceptions.InvalidValue: If the value could not be converted to unicode.
146 """
147 result = value.decode("utf-8") if isinstance(value, bytes) else value
148 if isinstance(result, str):
149 return result
150 else:
151 raise exceptions.InvalidValue(
152 "{0!r} could not be converted to unicode".format(value)
153 )
154
155
156 def update_query(url, params, remove=None):
157 """Updates a URL's query parameters.
158
159 Replaces any current values if they are already present in the URL.
160
161 Args:
162 url (str): The URL to update.
163 params (Mapping[str, str]): A mapping of query parameter
164 keys to values.
165 remove (Sequence[str]): Parameters to remove from the query string.
166
167 Returns:
168 str: The URL with updated query parameters.
169
170 Examples:
171
172 >>> url = 'http://example.com?a=1'
173 >>> update_query(url, {'a': '2'})
174 http://example.com?a=2
175 >>> update_query(url, {'b': '3'})
176 http://example.com?a=1&b=3
177 >> update_query(url, {'b': '3'}, remove=['a'])
178 http://example.com?b=3
179
180 """
181 if remove is None:
182 remove = []
183
184 # Split the URL into parts.
185 parts = urllib.parse.urlparse(url)
186 # Parse the query string.
187 query_params = urllib.parse.parse_qs(parts.query)
188 # Update the query parameters with the new parameters.
189 query_params.update(params)
190 # Remove any values specified in remove.
191 query_params = {
192 key: value for key, value in query_params.items() if key not in remove
193 }
194 # Re-encoded the query string.
195 new_query = urllib.parse.urlencode(query_params, doseq=True)
196 # Unsplit the url.
197 new_parts = parts._replace(query=new_query)
198 return urllib.parse.urlunparse(new_parts)
199
200
201 def scopes_to_string(scopes):
202 """Converts scope value to a string suitable for sending to OAuth 2.0
203 authorization servers.
204
205 Args:
206 scopes (Sequence[str]): The sequence of scopes to convert.
207
208 Returns:
209 str: The scopes formatted as a single string.
210 """
211 return " ".join(scopes)
212
213
214 def string_to_scopes(scopes):
215 """Converts stringifed scopes value to a list.
216
217 Args:
218 scopes (Union[Sequence, str]): The string of space-separated scopes
219 to convert.
220 Returns:
221 Sequence(str): The separated scopes.
222 """
223 if not scopes:
224 return []
225
226 return scopes.split(" ")
227
228
229 def padded_urlsafe_b64decode(value):
230 """Decodes base64 strings lacking padding characters.
231
232 Google infrastructure tends to omit the base64 padding characters.
233
234 Args:
235 value (Union[str, bytes]): The encoded value.
236
237 Returns:
238 bytes: The decoded value
239 """
240 b64string = to_bytes(value)
241 padded = b64string + b"=" * (-len(b64string) % 4)
242 return base64.urlsafe_b64decode(padded)
243
244
245 def unpadded_urlsafe_b64encode(value):
246 """Encodes base64 strings removing any padding characters.
247
248 `rfc 7515`_ defines Base64url to NOT include any padding
249 characters, but the stdlib doesn't do that by default.
250
251 _rfc7515: https://tools.ietf.org/html/rfc7515#page-6
252
253 Args:
254 value (Union[str|bytes]): The bytes-like value to encode
255
256 Returns:
257 Union[str|bytes]: The encoded value
258 """
259 return base64.urlsafe_b64encode(value).rstrip(b"=")
260
261
262 def is_python_3():
263 """Check if the Python interpreter is Python 2 or 3.
264
265 Returns:
266 bool: True if the Python interpreter is Python 3 and False otherwise.
267 """
268 return sys.version_info > (3, 0)
269
[end of google/auth/_helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/google/auth/_helpers.py b/google/auth/_helpers.py
--- a/google/auth/_helpers.py
+++ b/google/auth/_helpers.py
@@ -92,7 +92,14 @@
Returns:
datetime: The current time in UTC.
"""
- return datetime.datetime.utcnow()
+ # We used datetime.utcnow() before, since it's deprecated from python 3.12,
+ # we are using datetime.now(timezone.utc) now. "utcnow()" is offset-native
+ # (no timezone info), but "now()" is offset-aware (with timezone info).
+ # This will cause datetime comparison problem. For backward compatibility,
+ # we need to remove the timezone info.
+ now = datetime.datetime.now(datetime.timezone.utc)
+ now = now.replace(tzinfo=None)
+ return now
def datetime_to_secs(value):
| {"golden_diff": "diff --git a/google/auth/_helpers.py b/google/auth/_helpers.py\n--- a/google/auth/_helpers.py\n+++ b/google/auth/_helpers.py\n@@ -92,7 +92,14 @@\n Returns:\n datetime: The current time in UTC.\n \"\"\"\n- return datetime.datetime.utcnow()\n+ # We used datetime.utcnow() before, since it's deprecated from python 3.12,\n+ # we are using datetime.now(timezone.utc) now. \"utcnow()\" is offset-native\n+ # (no timezone info), but \"now()\" is offset-aware (with timezone info).\n+ # This will cause datetime comparison problem. For backward compatibility,\n+ # we need to remove the timezone info.\n+ now = datetime.datetime.now(datetime.timezone.utc)\n+ now = now.replace(tzinfo=None)\n+ return now\n \n \n def datetime_to_secs(value):\n", "issue": "utcnow is deprecated in python 3.12\n\r\n#### Environment details\r\n\r\n - OS: Linux\r\n - Python version: 3.12.0\r\n - pip version: 23.2.1\r\n - `google-auth` version: 2.9.1\r\n\r\n#### Issue\r\nHere is the related code\r\n\r\nhttps://github.com/googleapis/google-auth-library-python/blob/d2ab3afdb567850121fec7de1d86fb5fb0fa80ed/google/auth/_helpers.py#L89-L95\r\n\r\n\n", "before_files": [{"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Helper functions for commonly used utilities.\"\"\"\n\nimport base64\nimport calendar\nimport datetime\nfrom email.message import Message\nimport sys\nimport urllib\n\nfrom google.auth import exceptions\n\n# Token server doesn't provide a new a token when doing refresh unless the\n# token is expiring within 30 seconds, so refresh threshold should not be\n# more than 30 seconds. Otherwise auth lib will send tons of refresh requests\n# until 30 seconds before the expiration, and cause a spike of CPU usage.\nREFRESH_THRESHOLD = datetime.timedelta(seconds=20)\n\n\ndef copy_docstring(source_class):\n \"\"\"Decorator that copies a method's docstring from another class.\n\n Args:\n source_class (type): The class that has the documented method.\n\n Returns:\n Callable: A decorator that will copy the docstring of the same\n named method in the source class to the decorated method.\n \"\"\"\n\n def decorator(method):\n \"\"\"Decorator implementation.\n\n Args:\n method (Callable): The method to copy the docstring to.\n\n Returns:\n Callable: the same method passed in with an updated docstring.\n\n Raises:\n google.auth.exceptions.InvalidOperation: if the method already has a docstring.\n \"\"\"\n if method.__doc__:\n raise exceptions.InvalidOperation(\"Method already has a docstring.\")\n\n source_method = getattr(source_class, method.__name__)\n method.__doc__ = source_method.__doc__\n\n return method\n\n return decorator\n\n\ndef parse_content_type(header_value):\n \"\"\"Parse a 'content-type' header value to get just the plain media-type (without parameters).\n\n This is done using the class Message from email.message as suggested in PEP 594\n (because the cgi is now deprecated and will be removed in python 3.13,\n see https://peps.python.org/pep-0594/#cgi).\n\n Args:\n header_value (str): The value of a 'content-type' header as a string.\n\n Returns:\n str: A string with just the lowercase media-type from the parsed 'content-type' header.\n If the provided content-type is not parsable, returns 'text/plain',\n the default value for textual files.\n \"\"\"\n m = Message()\n m[\"content-type\"] = header_value\n return (\n m.get_content_type()\n ) # Despite the name, actually returns just the media-type\n\n\ndef utcnow():\n \"\"\"Returns the current UTC datetime.\n\n Returns:\n datetime: The current time in UTC.\n \"\"\"\n return datetime.datetime.utcnow()\n\n\ndef datetime_to_secs(value):\n \"\"\"Convert a datetime object to the number of seconds since the UNIX epoch.\n\n Args:\n value (datetime): The datetime to convert.\n\n Returns:\n int: The number of seconds since the UNIX epoch.\n \"\"\"\n return calendar.timegm(value.utctimetuple())\n\n\ndef to_bytes(value, encoding=\"utf-8\"):\n \"\"\"Converts a string value to bytes, if necessary.\n\n Args:\n value (Union[str, bytes]): The value to be converted.\n encoding (str): The encoding to use to convert unicode to bytes.\n Defaults to \"utf-8\".\n\n Returns:\n bytes: The original value converted to bytes (if unicode) or as\n passed in if it started out as bytes.\n\n Raises:\n google.auth.exceptions.InvalidValue: If the value could not be converted to bytes.\n \"\"\"\n result = value.encode(encoding) if isinstance(value, str) else value\n if isinstance(result, bytes):\n return result\n else:\n raise exceptions.InvalidValue(\n \"{0!r} could not be converted to bytes\".format(value)\n )\n\n\ndef from_bytes(value):\n \"\"\"Converts bytes to a string value, if necessary.\n\n Args:\n value (Union[str, bytes]): The value to be converted.\n\n Returns:\n str: The original value converted to unicode (if bytes) or as passed in\n if it started out as unicode.\n\n Raises:\n google.auth.exceptions.InvalidValue: If the value could not be converted to unicode.\n \"\"\"\n result = value.decode(\"utf-8\") if isinstance(value, bytes) else value\n if isinstance(result, str):\n return result\n else:\n raise exceptions.InvalidValue(\n \"{0!r} could not be converted to unicode\".format(value)\n )\n\n\ndef update_query(url, params, remove=None):\n \"\"\"Updates a URL's query parameters.\n\n Replaces any current values if they are already present in the URL.\n\n Args:\n url (str): The URL to update.\n params (Mapping[str, str]): A mapping of query parameter\n keys to values.\n remove (Sequence[str]): Parameters to remove from the query string.\n\n Returns:\n str: The URL with updated query parameters.\n\n Examples:\n\n >>> url = 'http://example.com?a=1'\n >>> update_query(url, {'a': '2'})\n http://example.com?a=2\n >>> update_query(url, {'b': '3'})\n http://example.com?a=1&b=3\n >> update_query(url, {'b': '3'}, remove=['a'])\n http://example.com?b=3\n\n \"\"\"\n if remove is None:\n remove = []\n\n # Split the URL into parts.\n parts = urllib.parse.urlparse(url)\n # Parse the query string.\n query_params = urllib.parse.parse_qs(parts.query)\n # Update the query parameters with the new parameters.\n query_params.update(params)\n # Remove any values specified in remove.\n query_params = {\n key: value for key, value in query_params.items() if key not in remove\n }\n # Re-encoded the query string.\n new_query = urllib.parse.urlencode(query_params, doseq=True)\n # Unsplit the url.\n new_parts = parts._replace(query=new_query)\n return urllib.parse.urlunparse(new_parts)\n\n\ndef scopes_to_string(scopes):\n \"\"\"Converts scope value to a string suitable for sending to OAuth 2.0\n authorization servers.\n\n Args:\n scopes (Sequence[str]): The sequence of scopes to convert.\n\n Returns:\n str: The scopes formatted as a single string.\n \"\"\"\n return \" \".join(scopes)\n\n\ndef string_to_scopes(scopes):\n \"\"\"Converts stringifed scopes value to a list.\n\n Args:\n scopes (Union[Sequence, str]): The string of space-separated scopes\n to convert.\n Returns:\n Sequence(str): The separated scopes.\n \"\"\"\n if not scopes:\n return []\n\n return scopes.split(\" \")\n\n\ndef padded_urlsafe_b64decode(value):\n \"\"\"Decodes base64 strings lacking padding characters.\n\n Google infrastructure tends to omit the base64 padding characters.\n\n Args:\n value (Union[str, bytes]): The encoded value.\n\n Returns:\n bytes: The decoded value\n \"\"\"\n b64string = to_bytes(value)\n padded = b64string + b\"=\" * (-len(b64string) % 4)\n return base64.urlsafe_b64decode(padded)\n\n\ndef unpadded_urlsafe_b64encode(value):\n \"\"\"Encodes base64 strings removing any padding characters.\n\n `rfc 7515`_ defines Base64url to NOT include any padding\n characters, but the stdlib doesn't do that by default.\n\n _rfc7515: https://tools.ietf.org/html/rfc7515#page-6\n\n Args:\n value (Union[str|bytes]): The bytes-like value to encode\n\n Returns:\n Union[str|bytes]: The encoded value\n \"\"\"\n return base64.urlsafe_b64encode(value).rstrip(b\"=\")\n\n\ndef is_python_3():\n \"\"\"Check if the Python interpreter is Python 2 or 3.\n\n Returns:\n bool: True if the Python interpreter is Python 3 and False otherwise.\n \"\"\"\n return sys.version_info > (3, 0)\n", "path": "google/auth/_helpers.py"}]} | 3,242 | 189 |
gh_patches_debug_15574 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-872 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Verbose output not shown unless -s is specified
I am running a test suite with hypothesis using py.test, when setting HYPOTHESIS_VERBOSITY_LEVEL=verbose environment variable I expected to see the intermediate results. However I need to specify -s when invokin py.test otherwise the intermediate results are suppressed.
Python 3.6.0a1
py.test 2.9.2
hypothesis 3.4.2
</issue>
<code>
[start of docs/conf.py]
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2017 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 # -*- coding: utf-8 -*-
19
20 from __future__ import division, print_function, absolute_import
21
22 # on_rtd is whether we are on readthedocs.org
23 import os
24 import sys
25 import datetime
26
27 from hypothesis import __version__
28
29 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
30
31 sys.path.append(
32 os.path.join(os.path.dirname(__file__), '..', 'src')
33 )
34
35
36 autodoc_member_order = 'bysource'
37
38 extensions = [
39 'sphinx.ext.autodoc',
40 'sphinx.ext.doctest',
41 'sphinx.ext.extlinks',
42 'sphinx.ext.viewcode',
43 'sphinx.ext.intersphinx',
44 ]
45
46 templates_path = ['_templates']
47
48 source_suffix = '.rst'
49
50 # The master toctree document.
51 master_doc = 'index'
52
53 # General information about the project.
54 project = u'Hypothesis'
55 copyright = u'2013-%s, David R. MacIver' % datetime.datetime.utcnow().year
56 author = u'David R. MacIver'
57
58 version = __version__
59 release = __version__
60
61 language = None
62
63 exclude_patterns = ['_build']
64
65 pygments_style = 'sphinx'
66
67 todo_include_todos = False
68
69 intersphinx_mapping = {
70 'python': ('https://docs.python.org/3/', None),
71 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
72 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None)
73 }
74
75 autodoc_mock_imports = ['numpy', 'pandas']
76
77 doctest_global_setup = '''
78 # Some standard imports
79 from hypothesis import *
80 from hypothesis.strategies import *
81 # Ensure that output (including from strategies) is deterministic
82 import random
83 random.seed(0)
84 # don't save examples
85 settings.register_profile('doctests', settings(database=None))
86 settings.load_profile('doctests')
87 import warnings
88 warnings.filterwarnings('error', category=HypothesisDeprecationWarning)
89 '''
90
91 # This config value must be a dictionary of external sites, mapping unique
92 # short alias names to a base URL and a prefix.
93 # See http://sphinx-doc.org/ext/extlinks.html
94 extlinks = {
95 'commit': ('https://github.com/HypothesisWorks/hypothesis-python/commit/%s', 'commit '),
96 'gh-file': ('https://github.com/HypothesisWorks/hypothesis-python/blob/master/%s', ''),
97 'gh-link': ('https://github.com/HypothesisWorks/hypothesis-python/%s', ''),
98 'issue': ('https://github.com/HypothesisWorks/hypothesis-python/issues/%s', 'issue #'),
99 'pull': ('https://github.com/HypothesisWorks/hypothesis-python/pulls/%s', 'pull request #'),
100 }
101
102 # -- Options for HTML output ----------------------------------------------
103
104 if not on_rtd: # only import and set the theme if we're building docs locally
105 import sphinx_rtd_theme
106 html_theme = 'sphinx_rtd_theme'
107 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
108
109 html_static_path = ['_static']
110
111 htmlhelp_basename = 'Hypothesisdoc'
112
113 # -- Options for LaTeX output ---------------------------------------------
114
115 latex_elements = {
116 }
117
118 latex_documents = [
119 (master_doc, 'Hypothesis.tex', u'Hypothesis Documentation',
120 u'David R. MacIver', 'manual'),
121 ]
122
123 man_pages = [
124 (master_doc, 'hypothesis', u'Hypothesis Documentation',
125 [author], 1)
126 ]
127
128 texinfo_documents = [
129 (master_doc, 'Hypothesis', u'Hypothesis Documentation',
130 author, 'Hypothesis', 'One line description of project.',
131 'Miscellaneous'),
132 ]
133
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -69,7 +69,8 @@
intersphinx_mapping = {
'python': ('https://docs.python.org/3/', None),
'numpy': ('https://docs.scipy.org/doc/numpy/', None),
- 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None)
+ 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None),
+ 'pytest': ('https://docs.pytest.org/en/stable/', None),
}
autodoc_mock_imports = ['numpy', 'pandas']
@@ -127,6 +128,6 @@
texinfo_documents = [
(master_doc, 'Hypothesis', u'Hypothesis Documentation',
- author, 'Hypothesis', 'One line description of project.',
+ author, 'Hypothesis', 'Advanced property-based testing for Python.',
'Miscellaneous'),
]
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -69,7 +69,8 @@\n intersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'numpy': ('https://docs.scipy.org/doc/numpy/', None),\n- 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None)\n+ 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None),\n+ 'pytest': ('https://docs.pytest.org/en/stable/', None),\n }\n \n autodoc_mock_imports = ['numpy', 'pandas']\n@@ -127,6 +128,6 @@\n \n texinfo_documents = [\n (master_doc, 'Hypothesis', u'Hypothesis Documentation',\n- author, 'Hypothesis', 'One line description of project.',\n+ author, 'Hypothesis', 'Advanced property-based testing for Python.',\n 'Miscellaneous'),\n ]\n", "issue": "Verbose output not shown unless -s is specified\nI am running a test suite with hypothesis using py.test, when setting HYPOTHESIS_VERBOSITY_LEVEL=verbose environment variable I expected to see the intermediate results. However I need to specify -s when invokin py.test otherwise the intermediate results are suppressed.\n\nPython 3.6.0a1\npy.test 2.9.2\nhypothesis 3.4.2\n\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2017 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\n# -*- coding: utf-8 -*-\n\nfrom __future__ import division, print_function, absolute_import\n\n# on_rtd is whether we are on readthedocs.org\nimport os\nimport sys\nimport datetime\n\nfrom hypothesis import __version__\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nsys.path.append(\n os.path.join(os.path.dirname(__file__), '..', 'src')\n)\n\n\nautodoc_member_order = 'bysource'\n\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.extlinks',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.intersphinx',\n]\n\ntemplates_path = ['_templates']\n\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Hypothesis'\ncopyright = u'2013-%s, David R. MacIver' % datetime.datetime.utcnow().year\nauthor = u'David R. MacIver'\n\nversion = __version__\nrelease = __version__\n\nlanguage = None\n\nexclude_patterns = ['_build']\n\npygments_style = 'sphinx'\n\ntodo_include_todos = False\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'numpy': ('https://docs.scipy.org/doc/numpy/', None),\n 'pandas': ('https://pandas.pydata.org/pandas-docs/stable/', None)\n}\n\nautodoc_mock_imports = ['numpy', 'pandas']\n\ndoctest_global_setup = '''\n# Some standard imports\nfrom hypothesis import *\nfrom hypothesis.strategies import *\n# Ensure that output (including from strategies) is deterministic\nimport random\nrandom.seed(0)\n# don't save examples\nsettings.register_profile('doctests', settings(database=None))\nsettings.load_profile('doctests')\nimport warnings\nwarnings.filterwarnings('error', category=HypothesisDeprecationWarning)\n'''\n\n# This config value must be a dictionary of external sites, mapping unique\n# short alias names to a base URL and a prefix.\n# See http://sphinx-doc.org/ext/extlinks.html\nextlinks = {\n 'commit': ('https://github.com/HypothesisWorks/hypothesis-python/commit/%s', 'commit '),\n 'gh-file': ('https://github.com/HypothesisWorks/hypothesis-python/blob/master/%s', ''),\n 'gh-link': ('https://github.com/HypothesisWorks/hypothesis-python/%s', ''),\n 'issue': ('https://github.com/HypothesisWorks/hypothesis-python/issues/%s', 'issue #'),\n 'pull': ('https://github.com/HypothesisWorks/hypothesis-python/pulls/%s', 'pull request #'),\n}\n\n# -- Options for HTML output ----------------------------------------------\n\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\nhtml_static_path = ['_static']\n\nhtmlhelp_basename = 'Hypothesisdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n}\n\nlatex_documents = [\n (master_doc, 'Hypothesis.tex', u'Hypothesis Documentation',\n u'David R. MacIver', 'manual'),\n]\n\nman_pages = [\n (master_doc, 'hypothesis', u'Hypothesis Documentation',\n [author], 1)\n]\n\ntexinfo_documents = [\n (master_doc, 'Hypothesis', u'Hypothesis Documentation',\n author, 'Hypothesis', 'One line description of project.',\n 'Miscellaneous'),\n]\n", "path": "docs/conf.py"}]} | 1,915 | 228 |
gh_patches_debug_11949 | rasdani/github-patches | git_diff | cupy__cupy-2923 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Windows: module 'numpy' has no attribute 'complex256'
In file: [cupy/cupyx/scipy/ndimage/filters.py] (https://github.com/cupy/cupy/blob/master/cupyx/scipy/ndimage/filters.py)
Line 83: ` if input.dtype in (numpy.complex64, numpy.complex128, numpy.complex256):`
There is a check for numpy.complex256. On Windows there is no numpy.complex256, so this line leads to the following error:
module 'numpy' has no attribute 'complex256'
a simple solution could be to use the function [numpy.iscomplexobj
](https://docs.scipy.org/doc/numpy/reference/generated/numpy.iscomplexobj.html#numpy.iscomplexobj) or (maybe faster) to stringify the dtype of the array and check for substring "complex"
</issue>
<code>
[start of cupyx/scipy/ndimage/filters.py]
1 import numpy
2
3 import cupy
4 from cupy import util
5
6
7 def correlate(input, weights, output=None, mode='reflect', cval=0.0, origin=0):
8 """Multi-dimensional correlate.
9
10 The array is correlated with the given kernel.
11
12 Args:
13 input (cupy.ndarray): The input array.
14 weights (cupy.ndarray): Array of weights, same number of dimensions as
15 input
16 output (cupy.ndarray, dtype or None): The array in which to place the
17 output.
18 mode (str): The array borders are handled according to the given mode
19 (``'reflect'``, ``'constant'``, ``'nearest'``, ``'mirror'``,
20 ``'wrap'``). Default is ``'reflect'``.
21 cval (scalar): Value to fill past edges of input if mode is
22 ``constant``. Default is ``0.0``.
23 origin (scalar or tuple of scalar): The origin parameter controls the
24 placement of the filter, relative to the center of the current
25 element of the input. Default of 0 is equivalent to
26 ``(0,)*input.ndim``.
27
28 Returns:
29 cupy.ndarray: The result of correlate.
30
31 .. seealso:: :func:`scipy.ndimage.correlate`
32 """
33 return _correlate_or_convolve(input, weights, output, mode, cval, origin,
34 False)
35
36
37 def convolve(input, weights, output=None, mode='reflect', cval=0.0, origin=0):
38 """Multi-dimensional convolution.
39
40 The array is convolved with the given kernel.
41
42 Args:
43 input (cupy.ndarray): The input array.
44 weights (cupy.ndarray): Array of weights, same number of dimensions as
45 input
46 output (cupy.ndarray, dtype or None): The array in which to place the
47 output.
48 mode (str): The array borders are handled according to the given mode
49 (``'reflect'``, ``'constant'``, ``'nearest'``, ``'mirror'``,
50 ``'wrap'``). Default is ``'reflect'``.
51 cval (scalar): Value to fill past edges of input if mode is
52 ``constant``. Default is ``0.0``.
53 origin (scalar or tuple of scalar): The origin parameter controls the
54 placement of the filter, relative to the center of the current
55 element of the input. Default of 0 is equivalent to
56 ``(0,)*input.ndim``.
57
58 Returns:
59 cupy.ndarray: The result of convolution.
60
61 .. seealso:: :func:`scipy.ndimage.convolve`
62 """
63 return _correlate_or_convolve(input, weights, output, mode, cval, origin,
64 True)
65
66
67 def _get_output(output, input, shape=None):
68 if shape is None:
69 shape = input.shape
70 if isinstance(output, cupy.ndarray):
71 if output.shape != tuple(shape):
72 raise ValueError('output shape is not correct')
73 else:
74 dtype = output
75 if dtype is None:
76 dtype = input.dtype
77 output = cupy.zeros(shape, dtype)
78 return output
79
80
81 def _correlate_or_convolve(input, weights, output, mode, cval, origin,
82 convolution):
83 if input.dtype in (numpy.complex64, numpy.complex128, numpy.complex256):
84 raise TypeError('Complex type not supported.')
85 if not hasattr(origin, '__getitem__'):
86 origin = [origin, ] * input.ndim
87 else:
88 origin = list(origin)
89 wshape = [ii for ii in weights.shape if ii > 0]
90 if len(wshape) != input.ndim:
91 raise RuntimeError('filter weights array has incorrect shape.')
92 if convolution:
93 weights = weights[tuple([slice(None, None, -1)] * weights.ndim)]
94 for ii in range(len(origin)):
95 origin[ii] = -origin[ii]
96 if weights.shape[ii] % 2 == 0:
97 origin[ii] -= 1
98 for _origin, lenw in zip(origin, wshape):
99 if (lenw // 2 + _origin < 0) or (lenw // 2 + _origin >= lenw):
100 raise ValueError('invalid origin')
101 if mode not in ('reflect', 'constant', 'nearest', 'mirror', 'wrap'):
102 msg = 'boundary mode not supported (actual: {}).'.format(mode)
103 raise RuntimeError(msg)
104
105 output = _get_output(output, input)
106 if weights.size == 0:
107 return output
108 input = cupy.ascontiguousarray(input)
109 weights = cupy.ascontiguousarray(weights, cupy.float64)
110 return _get_correlete_kernel(
111 input.ndim, mode, cval, input.shape, tuple(wshape), tuple(origin))(
112 input, weights, output)
113
114
115 def _generate_boundary_condition_ops(mode, ix, xsize):
116 if mode == 'reflect':
117 ops = '''
118 if ({ix} < 0) {{
119 {ix} = - 1 - {ix};
120 }}
121 {ix} %= {xsize} * 2;
122 {ix} = min({ix}, 2 * {xsize} - 1 - {ix});'''.format(ix=ix, xsize=xsize)
123 elif mode == 'mirror':
124 ops = '''
125 if ({ix} < 0) {{
126 {ix} = - {ix};
127 }}
128 if ({xsize} == 1) {{
129 {ix} = 0;
130 }} else {{
131 {ix} = 1 + ({ix} - 1) % (({xsize} - 1) * 2);
132 {ix} = min({ix}, 2 * {xsize} - 2 - {ix});
133 }}'''.format(ix=ix, xsize=xsize)
134 elif mode == 'nearest':
135 ops = '''
136 {ix} = min(max({ix}, 0), {xsize} - 1);'''.format(ix=ix, xsize=xsize)
137 elif mode == 'wrap':
138 ops = '''
139 if ({ix} < 0) {{
140 {ix} += (1 - ({ix} / {xsize})) * {xsize};
141 }}
142 {ix} %= {xsize};'''.format(ix=ix, xsize=xsize)
143 elif mode == 'constant':
144 ops = '''
145 if ({ix} >= {xsize}) {{
146 {ix} = -1;
147 }}'''.format(ix=ix, xsize=xsize)
148 return ops
149
150
151 def _generate_correlete_kernel(ndim, mode, cval, xshape, wshape, origin):
152 in_params = 'raw X x, raw W w'
153 out_params = 'Y y'
154
155 ops = []
156 ops.append('const int sx_{} = 1;'.format(ndim-1))
157 for j in range(ndim-1, 0, -1):
158 ops.append('int sx_{jm} = sx_{j} * {xsize_j};'.
159 format(jm=j-1, j=j, xsize_j=xshape[j]))
160 ops.append('int _i = i;')
161 for j in range(ndim-1, -1, -1):
162 ops.append('int cx_{j} = _i % {xsize} - ({wsize} / 2) - ({origin});'
163 .format(j=j, xsize=xshape[j], wsize=wshape[j],
164 origin=origin[j]))
165 if (j > 0):
166 ops.append('_i /= {xsize};'.format(xsize=xshape[j]))
167 ops.append('W sum = (W)0;')
168 ops.append('int iw = 0;')
169
170 for j in range(ndim):
171 ops.append('''
172 for (int iw_{j} = 0; iw_{j} < {wsize}; iw_{j}++)
173 {{
174 int ix_{j} = cx_{j} + iw_{j};'''.format(j=j, wsize=wshape[j]))
175 ixvar = 'ix_{}'.format(j)
176 ops.append(_generate_boundary_condition_ops(mode, ixvar, xshape[j]))
177 ops.append(' ix_{j} *= sx_{j};'.format(j=j))
178
179 _cond = ' || '.join(['(ix_{0} < 0)'.format(j) for j in range(ndim)])
180 _expr = ' + '.join(['ix_{0}'.format(j) for j in range(ndim)])
181 ops.append('''
182 if ({cond}) {{
183 sum += (W){cval} * w[iw];
184 }} else {{
185 int ix = {expr};
186 sum += (W)x[ix] * w[iw];
187 }}
188 iw += 1;'''.format(cond=_cond, expr=_expr, cval=cval))
189
190 ops.append('} ' * ndim)
191 ops.append('y = (Y)sum;')
192 operation = '\n'.join(ops)
193
194 name = 'cupy_ndimage_correlate_{}d_{}_x{}_w{}'.format(
195 ndim, mode, '_'.join(['{}'.format(j) for j in xshape]),
196 '_'.join(['{}'.format(j) for j in wshape]))
197 return in_params, out_params, operation, name
198
199
200 @util.memoize()
201 def _get_correlete_kernel(ndim, mode, cval, xshape, wshape, origin):
202 # weights is always casted to float64 in order to get an output compatible
203 # with SciPy, thought float32 might be sufficient when input dtype is low
204 # precision.
205 in_params, out_params, operation, name = _generate_correlete_kernel(
206 ndim, mode, cval, xshape, wshape, origin)
207 return cupy.ElementwiseKernel(in_params, out_params, operation, name)
208
[end of cupyx/scipy/ndimage/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupyx/scipy/ndimage/filters.py b/cupyx/scipy/ndimage/filters.py
--- a/cupyx/scipy/ndimage/filters.py
+++ b/cupyx/scipy/ndimage/filters.py
@@ -1,5 +1,3 @@
-import numpy
-
import cupy
from cupy import util
@@ -80,7 +78,7 @@
def _correlate_or_convolve(input, weights, output, mode, cval, origin,
convolution):
- if input.dtype in (numpy.complex64, numpy.complex128, numpy.complex256):
+ if input.dtype.kind == 'c':
raise TypeError('Complex type not supported.')
if not hasattr(origin, '__getitem__'):
origin = [origin, ] * input.ndim
| {"golden_diff": "diff --git a/cupyx/scipy/ndimage/filters.py b/cupyx/scipy/ndimage/filters.py\n--- a/cupyx/scipy/ndimage/filters.py\n+++ b/cupyx/scipy/ndimage/filters.py\n@@ -1,5 +1,3 @@\n-import numpy\n-\n import cupy\n from cupy import util\n \n@@ -80,7 +78,7 @@\n \n def _correlate_or_convolve(input, weights, output, mode, cval, origin,\n convolution):\n- if input.dtype in (numpy.complex64, numpy.complex128, numpy.complex256):\n+ if input.dtype.kind == 'c':\n raise TypeError('Complex type not supported.')\n if not hasattr(origin, '__getitem__'):\n origin = [origin, ] * input.ndim\n", "issue": "Windows: module 'numpy' has no attribute 'complex256'\nIn file: [cupy/cupyx/scipy/ndimage/filters.py] (https://github.com/cupy/cupy/blob/master/cupyx/scipy/ndimage/filters.py)\r\n\r\nLine 83: ` if input.dtype in (numpy.complex64, numpy.complex128, numpy.complex256):`\r\n\r\nThere is a check for numpy.complex256. On Windows there is no numpy.complex256, so this line leads to the following error:\r\n\r\nmodule 'numpy' has no attribute 'complex256'\r\n\r\na simple solution could be to use the function [numpy.iscomplexobj\r\n](https://docs.scipy.org/doc/numpy/reference/generated/numpy.iscomplexobj.html#numpy.iscomplexobj) or (maybe faster) to stringify the dtype of the array and check for substring \"complex\"\n", "before_files": [{"content": "import numpy\n\nimport cupy\nfrom cupy import util\n\n\ndef correlate(input, weights, output=None, mode='reflect', cval=0.0, origin=0):\n \"\"\"Multi-dimensional correlate.\n\n The array is correlated with the given kernel.\n\n Args:\n input (cupy.ndarray): The input array.\n weights (cupy.ndarray): Array of weights, same number of dimensions as\n input\n output (cupy.ndarray, dtype or None): The array in which to place the\n output.\n mode (str): The array borders are handled according to the given mode\n (``'reflect'``, ``'constant'``, ``'nearest'``, ``'mirror'``,\n ``'wrap'``). Default is ``'reflect'``.\n cval (scalar): Value to fill past edges of input if mode is\n ``constant``. Default is ``0.0``.\n origin (scalar or tuple of scalar): The origin parameter controls the\n placement of the filter, relative to the center of the current\n element of the input. Default of 0 is equivalent to\n ``(0,)*input.ndim``.\n\n Returns:\n cupy.ndarray: The result of correlate.\n\n .. seealso:: :func:`scipy.ndimage.correlate`\n \"\"\"\n return _correlate_or_convolve(input, weights, output, mode, cval, origin,\n False)\n\n\ndef convolve(input, weights, output=None, mode='reflect', cval=0.0, origin=0):\n \"\"\"Multi-dimensional convolution.\n\n The array is convolved with the given kernel.\n\n Args:\n input (cupy.ndarray): The input array.\n weights (cupy.ndarray): Array of weights, same number of dimensions as\n input\n output (cupy.ndarray, dtype or None): The array in which to place the\n output.\n mode (str): The array borders are handled according to the given mode\n (``'reflect'``, ``'constant'``, ``'nearest'``, ``'mirror'``,\n ``'wrap'``). Default is ``'reflect'``.\n cval (scalar): Value to fill past edges of input if mode is\n ``constant``. Default is ``0.0``.\n origin (scalar or tuple of scalar): The origin parameter controls the\n placement of the filter, relative to the center of the current\n element of the input. Default of 0 is equivalent to\n ``(0,)*input.ndim``.\n\n Returns:\n cupy.ndarray: The result of convolution.\n\n .. seealso:: :func:`scipy.ndimage.convolve`\n \"\"\"\n return _correlate_or_convolve(input, weights, output, mode, cval, origin,\n True)\n\n\ndef _get_output(output, input, shape=None):\n if shape is None:\n shape = input.shape\n if isinstance(output, cupy.ndarray):\n if output.shape != tuple(shape):\n raise ValueError('output shape is not correct')\n else:\n dtype = output\n if dtype is None:\n dtype = input.dtype\n output = cupy.zeros(shape, dtype)\n return output\n\n\ndef _correlate_or_convolve(input, weights, output, mode, cval, origin,\n convolution):\n if input.dtype in (numpy.complex64, numpy.complex128, numpy.complex256):\n raise TypeError('Complex type not supported.')\n if not hasattr(origin, '__getitem__'):\n origin = [origin, ] * input.ndim\n else:\n origin = list(origin)\n wshape = [ii for ii in weights.shape if ii > 0]\n if len(wshape) != input.ndim:\n raise RuntimeError('filter weights array has incorrect shape.')\n if convolution:\n weights = weights[tuple([slice(None, None, -1)] * weights.ndim)]\n for ii in range(len(origin)):\n origin[ii] = -origin[ii]\n if weights.shape[ii] % 2 == 0:\n origin[ii] -= 1\n for _origin, lenw in zip(origin, wshape):\n if (lenw // 2 + _origin < 0) or (lenw // 2 + _origin >= lenw):\n raise ValueError('invalid origin')\n if mode not in ('reflect', 'constant', 'nearest', 'mirror', 'wrap'):\n msg = 'boundary mode not supported (actual: {}).'.format(mode)\n raise RuntimeError(msg)\n\n output = _get_output(output, input)\n if weights.size == 0:\n return output\n input = cupy.ascontiguousarray(input)\n weights = cupy.ascontiguousarray(weights, cupy.float64)\n return _get_correlete_kernel(\n input.ndim, mode, cval, input.shape, tuple(wshape), tuple(origin))(\n input, weights, output)\n\n\ndef _generate_boundary_condition_ops(mode, ix, xsize):\n if mode == 'reflect':\n ops = '''\n if ({ix} < 0) {{\n {ix} = - 1 - {ix};\n }}\n {ix} %= {xsize} * 2;\n {ix} = min({ix}, 2 * {xsize} - 1 - {ix});'''.format(ix=ix, xsize=xsize)\n elif mode == 'mirror':\n ops = '''\n if ({ix} < 0) {{\n {ix} = - {ix};\n }}\n if ({xsize} == 1) {{\n {ix} = 0;\n }} else {{\n {ix} = 1 + ({ix} - 1) % (({xsize} - 1) * 2);\n {ix} = min({ix}, 2 * {xsize} - 2 - {ix});\n }}'''.format(ix=ix, xsize=xsize)\n elif mode == 'nearest':\n ops = '''\n {ix} = min(max({ix}, 0), {xsize} - 1);'''.format(ix=ix, xsize=xsize)\n elif mode == 'wrap':\n ops = '''\n if ({ix} < 0) {{\n {ix} += (1 - ({ix} / {xsize})) * {xsize};\n }}\n {ix} %= {xsize};'''.format(ix=ix, xsize=xsize)\n elif mode == 'constant':\n ops = '''\n if ({ix} >= {xsize}) {{\n {ix} = -1;\n }}'''.format(ix=ix, xsize=xsize)\n return ops\n\n\ndef _generate_correlete_kernel(ndim, mode, cval, xshape, wshape, origin):\n in_params = 'raw X x, raw W w'\n out_params = 'Y y'\n\n ops = []\n ops.append('const int sx_{} = 1;'.format(ndim-1))\n for j in range(ndim-1, 0, -1):\n ops.append('int sx_{jm} = sx_{j} * {xsize_j};'.\n format(jm=j-1, j=j, xsize_j=xshape[j]))\n ops.append('int _i = i;')\n for j in range(ndim-1, -1, -1):\n ops.append('int cx_{j} = _i % {xsize} - ({wsize} / 2) - ({origin});'\n .format(j=j, xsize=xshape[j], wsize=wshape[j],\n origin=origin[j]))\n if (j > 0):\n ops.append('_i /= {xsize};'.format(xsize=xshape[j]))\n ops.append('W sum = (W)0;')\n ops.append('int iw = 0;')\n\n for j in range(ndim):\n ops.append('''\n for (int iw_{j} = 0; iw_{j} < {wsize}; iw_{j}++)\n {{\n int ix_{j} = cx_{j} + iw_{j};'''.format(j=j, wsize=wshape[j]))\n ixvar = 'ix_{}'.format(j)\n ops.append(_generate_boundary_condition_ops(mode, ixvar, xshape[j]))\n ops.append(' ix_{j} *= sx_{j};'.format(j=j))\n\n _cond = ' || '.join(['(ix_{0} < 0)'.format(j) for j in range(ndim)])\n _expr = ' + '.join(['ix_{0}'.format(j) for j in range(ndim)])\n ops.append('''\n if ({cond}) {{\n sum += (W){cval} * w[iw];\n }} else {{\n int ix = {expr};\n sum += (W)x[ix] * w[iw];\n }}\n iw += 1;'''.format(cond=_cond, expr=_expr, cval=cval))\n\n ops.append('} ' * ndim)\n ops.append('y = (Y)sum;')\n operation = '\\n'.join(ops)\n\n name = 'cupy_ndimage_correlate_{}d_{}_x{}_w{}'.format(\n ndim, mode, '_'.join(['{}'.format(j) for j in xshape]),\n '_'.join(['{}'.format(j) for j in wshape]))\n return in_params, out_params, operation, name\n\n\[email protected]()\ndef _get_correlete_kernel(ndim, mode, cval, xshape, wshape, origin):\n # weights is always casted to float64 in order to get an output compatible\n # with SciPy, thought float32 might be sufficient when input dtype is low\n # precision.\n in_params, out_params, operation, name = _generate_correlete_kernel(\n ndim, mode, cval, xshape, wshape, origin)\n return cupy.ElementwiseKernel(in_params, out_params, operation, name)\n", "path": "cupyx/scipy/ndimage/filters.py"}]} | 3,416 | 184 |
gh_patches_debug_9373 | rasdani/github-patches | git_diff | PaddlePaddle__PaddleSeg-1788 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kappa系数大于1
版本:Paddle2.2,PaddleSeg版本:2.3.0。在评估模型时发现**Kappa系数大于1**。请问这是什么情况?
如下图所示:

配置文件如下:
```batch_size: 2
iters: 80000
model:
type: DeepLabV3P
backbone:
type: ResNet101_vd
output_stride: 8
multi_grid: [1, 2, 4]
pretrained: https://bj.bcebos.com/paddleseg/dygraph/resnet101_vd_ssld.tar.gz
num_classes: 2
backbone_indices: [0, 3]
aspp_ratios: [1, 12, 24, 36]
aspp_out_channels: 256
align_corners: False
pretrained: null
train_dataset:
type: Dataset
dataset_root: data/seg_data
train_path: data/seg_data/train.txt
num_classes: 2
transforms:
- type: ResizeStepScaling
min_scale_factor: 0.5
max_scale_factor: 2.0
scale_step_size: 0.25
- type: RandomPaddingCrop
crop_size: [512, 512]
- type: RandomHorizontalFlip
- type: RandomDistort
brightness_range: 0.4
contrast_range: 0.4
saturation_range: 0.4
- type: Normalize
mode: train
val_dataset:
type: Dataset
dataset_root: data/seg_data
val_path: data/seg_data/val.txt
num_classes: 2
transforms:
- type: Resize
target_size: [512, 512]
- type: Normalize
mode: val
optimizer:
type: sgd
momentum: 0.9
weight_decay: 4.0e-5
lr_scheduler:
type: PolynomialDecay
learning_rate: 0.01
end_lr: 0
power: 0.9
loss:
types:
- type: CrossEntropyLoss
coef: [1]
```
</issue>
<code>
[start of paddleseg/utils/metrics.py]
1 # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import numpy as np
16 import paddle
17 import paddle.nn.functional as F
18 import sklearn.metrics as skmetrics
19
20
21 def calculate_area(pred, label, num_classes, ignore_index=255):
22 """
23 Calculate intersect, prediction and label area
24
25 Args:
26 pred (Tensor): The prediction by model.
27 label (Tensor): The ground truth of image.
28 num_classes (int): The unique number of target classes.
29 ignore_index (int): Specifies a target value that is ignored. Default: 255.
30
31 Returns:
32 Tensor: The intersection area of prediction and the ground on all class.
33 Tensor: The prediction area on all class.
34 Tensor: The ground truth area on all class
35 """
36 if len(pred.shape) == 4:
37 pred = paddle.squeeze(pred, axis=1)
38 if len(label.shape) == 4:
39 label = paddle.squeeze(label, axis=1)
40 if not pred.shape == label.shape:
41 raise ValueError('Shape of `pred` and `label should be equal, '
42 'but there are {} and {}.'.format(
43 pred.shape, label.shape))
44 pred_area = []
45 label_area = []
46 intersect_area = []
47 mask = label != ignore_index
48
49 for i in range(num_classes):
50 pred_i = paddle.logical_and(pred == i, mask)
51 label_i = label == i
52 intersect_i = paddle.logical_and(pred_i, label_i)
53 pred_area.append(paddle.sum(paddle.cast(pred_i, "int32")))
54 label_area.append(paddle.sum(paddle.cast(label_i, "int32")))
55 intersect_area.append(paddle.sum(paddle.cast(intersect_i, "int32")))
56
57 pred_area = paddle.concat(pred_area)
58 label_area = paddle.concat(label_area)
59 intersect_area = paddle.concat(intersect_area)
60
61 return intersect_area, pred_area, label_area
62
63
64 def auc_roc(logits, label, num_classes, ignore_index=None):
65 """
66 Calculate area under the roc curve
67
68 Args:
69 logits (Tensor): The prediction by model on testset, of shape (N,C,H,W) .
70 label (Tensor): The ground truth of image. (N,1,H,W)
71 num_classes (int): The unique number of target classes.
72 ignore_index (int): Specifies a target value that is ignored. Default: 255.
73
74 Returns:
75 auc_roc(float): The area under roc curve
76 """
77 if ignore_index or len(np.unique(label)) > num_classes:
78 raise RuntimeError('labels with ignore_index is not supported yet.')
79
80 if len(label.shape) != 4:
81 raise ValueError(
82 'The shape of label is not 4 dimension as (N, C, H, W), it is {}'.
83 format(label.shape))
84
85 if len(logits.shape) != 4:
86 raise ValueError(
87 'The shape of logits is not 4 dimension as (N, C, H, W), it is {}'.
88 format(logits.shape))
89
90 N, C, H, W = logits.shape
91 logits = np.transpose(logits, (1, 0, 2, 3))
92 logits = logits.reshape([C, N * H * W]).transpose([1, 0])
93
94 label = np.transpose(label, (1, 0, 2, 3))
95 label = label.reshape([1, N * H * W]).squeeze()
96
97 if not logits.shape[0] == label.shape[0]:
98 raise ValueError('length of `logit` and `label` should be equal, '
99 'but they are {} and {}.'.format(
100 logits.shape[0], label.shape[0]))
101
102 if num_classes == 2:
103 auc = skmetrics.roc_auc_score(label, logits[:, 1])
104 else:
105 auc = skmetrics.roc_auc_score(label, logits, multi_class='ovr')
106
107 return auc
108
109
110 def mean_iou(intersect_area, pred_area, label_area):
111 """
112 Calculate iou.
113
114 Args:
115 intersect_area (Tensor): The intersection area of prediction and ground truth on all classes.
116 pred_area (Tensor): The prediction area on all classes.
117 label_area (Tensor): The ground truth area on all classes.
118
119 Returns:
120 np.ndarray: iou on all classes.
121 float: mean iou of all classes.
122 """
123 intersect_area = intersect_area.numpy()
124 pred_area = pred_area.numpy()
125 label_area = label_area.numpy()
126 union = pred_area + label_area - intersect_area
127 class_iou = []
128 for i in range(len(intersect_area)):
129 if union[i] == 0:
130 iou = 0
131 else:
132 iou = intersect_area[i] / union[i]
133 class_iou.append(iou)
134 miou = np.mean(class_iou)
135 return np.array(class_iou), miou
136
137
138 def dice(intersect_area, pred_area, label_area):
139 """
140 Calculate DICE.
141
142 Args:
143 intersect_area (Tensor): The intersection area of prediction and ground truth on all classes.
144 pred_area (Tensor): The prediction area on all classes.
145 label_area (Tensor): The ground truth area on all classes.
146
147 Returns:
148 np.ndarray: DICE on all classes.
149 float: mean DICE of all classes.
150 """
151 intersect_area = intersect_area.numpy()
152 pred_area = pred_area.numpy()
153 label_area = label_area.numpy()
154 union = pred_area + label_area
155 class_dice = []
156 for i in range(len(intersect_area)):
157 if union[i] == 0:
158 dice = 0
159 else:
160 dice = (2 * intersect_area[i]) / union[i]
161 class_dice.append(dice)
162 mdice = np.mean(class_dice)
163 return np.array(class_dice), mdice
164
165
166 def accuracy(intersect_area, pred_area):
167 """
168 Calculate accuracy
169
170 Args:
171 intersect_area (Tensor): The intersection area of prediction and ground truth on all classes..
172 pred_area (Tensor): The prediction area on all classes.
173
174 Returns:
175 np.ndarray: accuracy on all classes.
176 float: mean accuracy.
177 """
178 intersect_area = intersect_area.numpy()
179 pred_area = pred_area.numpy()
180 class_acc = []
181 for i in range(len(intersect_area)):
182 if pred_area[i] == 0:
183 acc = 0
184 else:
185 acc = intersect_area[i] / pred_area[i]
186 class_acc.append(acc)
187 macc = np.sum(intersect_area) / np.sum(pred_area)
188 return np.array(class_acc), macc
189
190
191 def kappa(intersect_area, pred_area, label_area):
192 """
193 Calculate kappa coefficient
194
195 Args:
196 intersect_area (Tensor): The intersection area of prediction and ground truth on all classes..
197 pred_area (Tensor): The prediction area on all classes.
198 label_area (Tensor): The ground truth area on all classes.
199
200 Returns:
201 float: kappa coefficient.
202 """
203 intersect_area = intersect_area.numpy()
204 pred_area = pred_area.numpy()
205 label_area = label_area.numpy()
206 total_area = np.sum(label_area)
207 po = np.sum(intersect_area) / total_area
208 pe = np.sum(pred_area * label_area) / (total_area * total_area)
209 kappa = (po - pe) / (1 - pe)
210 return kappa
211
[end of paddleseg/utils/metrics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/paddleseg/utils/metrics.py b/paddleseg/utils/metrics.py
--- a/paddleseg/utils/metrics.py
+++ b/paddleseg/utils/metrics.py
@@ -200,9 +200,9 @@
Returns:
float: kappa coefficient.
"""
- intersect_area = intersect_area.numpy()
- pred_area = pred_area.numpy()
- label_area = label_area.numpy()
+ intersect_area = intersect_area.numpy().astype(np.float64)
+ pred_area = pred_area.numpy().astype(np.float64)
+ label_area = label_area.numpy().astype(np.float64)
total_area = np.sum(label_area)
po = np.sum(intersect_area) / total_area
pe = np.sum(pred_area * label_area) / (total_area * total_area)
| {"golden_diff": "diff --git a/paddleseg/utils/metrics.py b/paddleseg/utils/metrics.py\n--- a/paddleseg/utils/metrics.py\n+++ b/paddleseg/utils/metrics.py\n@@ -200,9 +200,9 @@\n Returns:\n float: kappa coefficient.\n \"\"\"\n- intersect_area = intersect_area.numpy()\n- pred_area = pred_area.numpy()\n- label_area = label_area.numpy()\n+ intersect_area = intersect_area.numpy().astype(np.float64)\n+ pred_area = pred_area.numpy().astype(np.float64)\n+ label_area = label_area.numpy().astype(np.float64)\n total_area = np.sum(label_area)\n po = np.sum(intersect_area) / total_area\n pe = np.sum(pred_area * label_area) / (total_area * total_area)\n", "issue": "Kappa\u7cfb\u6570\u5927\u4e8e1\n\u7248\u672c\uff1aPaddle2.2\uff0cPaddleSeg\u7248\u672c\uff1a2.3.0\u3002\u5728\u8bc4\u4f30\u6a21\u578b\u65f6\u53d1\u73b0**Kappa\u7cfb\u6570\u5927\u4e8e1**\u3002\u8bf7\u95ee\u8fd9\u662f\u4ec0\u4e48\u60c5\u51b5\uff1f\r\n\u5982\u4e0b\u56fe\u6240\u793a\uff1a\r\n\r\n\r\n\u914d\u7f6e\u6587\u4ef6\u5982\u4e0b\uff1a\r\n```batch_size: 2\r\niters: 80000\r\n\r\nmodel:\r\n type: DeepLabV3P\r\n backbone:\r\n type: ResNet101_vd\r\n output_stride: 8\r\n multi_grid: [1, 2, 4]\r\n pretrained: https://bj.bcebos.com/paddleseg/dygraph/resnet101_vd_ssld.tar.gz\r\n num_classes: 2\r\n backbone_indices: [0, 3]\r\n aspp_ratios: [1, 12, 24, 36]\r\n aspp_out_channels: 256\r\n align_corners: False\r\n pretrained: null\r\n\r\n\r\ntrain_dataset:\r\n type: Dataset\r\n dataset_root: data/seg_data\r\n train_path: data/seg_data/train.txt\r\n num_classes: 2\r\n transforms:\r\n - type: ResizeStepScaling\r\n min_scale_factor: 0.5\r\n max_scale_factor: 2.0\r\n scale_step_size: 0.25\r\n - type: RandomPaddingCrop\r\n crop_size: [512, 512]\r\n - type: RandomHorizontalFlip\r\n - type: RandomDistort\r\n brightness_range: 0.4\r\n contrast_range: 0.4\r\n saturation_range: 0.4\r\n - type: Normalize\r\n mode: train\r\n\r\nval_dataset:\r\n type: Dataset\r\n dataset_root: data/seg_data\r\n val_path: data/seg_data/val.txt\r\n num_classes: 2\r\n transforms:\r\n - type: Resize\r\n target_size: [512, 512]\r\n - type: Normalize\r\n mode: val\r\n\r\n\r\noptimizer:\r\n type: sgd\r\n momentum: 0.9\r\n weight_decay: 4.0e-5\r\n\r\nlr_scheduler:\r\n type: PolynomialDecay\r\n learning_rate: 0.01\r\n end_lr: 0\r\n power: 0.9\r\n\r\nloss:\r\n types:\r\n - type: CrossEntropyLoss\r\n coef: [1]\r\n```\n", "before_files": [{"content": "# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport paddle\nimport paddle.nn.functional as F\nimport sklearn.metrics as skmetrics\n\n\ndef calculate_area(pred, label, num_classes, ignore_index=255):\n \"\"\"\n Calculate intersect, prediction and label area\n\n Args:\n pred (Tensor): The prediction by model.\n label (Tensor): The ground truth of image.\n num_classes (int): The unique number of target classes.\n ignore_index (int): Specifies a target value that is ignored. Default: 255.\n\n Returns:\n Tensor: The intersection area of prediction and the ground on all class.\n Tensor: The prediction area on all class.\n Tensor: The ground truth area on all class\n \"\"\"\n if len(pred.shape) == 4:\n pred = paddle.squeeze(pred, axis=1)\n if len(label.shape) == 4:\n label = paddle.squeeze(label, axis=1)\n if not pred.shape == label.shape:\n raise ValueError('Shape of `pred` and `label should be equal, '\n 'but there are {} and {}.'.format(\n pred.shape, label.shape))\n pred_area = []\n label_area = []\n intersect_area = []\n mask = label != ignore_index\n\n for i in range(num_classes):\n pred_i = paddle.logical_and(pred == i, mask)\n label_i = label == i\n intersect_i = paddle.logical_and(pred_i, label_i)\n pred_area.append(paddle.sum(paddle.cast(pred_i, \"int32\")))\n label_area.append(paddle.sum(paddle.cast(label_i, \"int32\")))\n intersect_area.append(paddle.sum(paddle.cast(intersect_i, \"int32\")))\n\n pred_area = paddle.concat(pred_area)\n label_area = paddle.concat(label_area)\n intersect_area = paddle.concat(intersect_area)\n\n return intersect_area, pred_area, label_area\n\n\ndef auc_roc(logits, label, num_classes, ignore_index=None):\n \"\"\"\n Calculate area under the roc curve\n\n Args:\n logits (Tensor): The prediction by model on testset, of shape (N,C,H,W) .\n label (Tensor): The ground truth of image. (N,1,H,W)\n num_classes (int): The unique number of target classes.\n ignore_index (int): Specifies a target value that is ignored. Default: 255.\n\n Returns:\n auc_roc(float): The area under roc curve\n \"\"\"\n if ignore_index or len(np.unique(label)) > num_classes:\n raise RuntimeError('labels with ignore_index is not supported yet.')\n\n if len(label.shape) != 4:\n raise ValueError(\n 'The shape of label is not 4 dimension as (N, C, H, W), it is {}'.\n format(label.shape))\n\n if len(logits.shape) != 4:\n raise ValueError(\n 'The shape of logits is not 4 dimension as (N, C, H, W), it is {}'.\n format(logits.shape))\n\n N, C, H, W = logits.shape\n logits = np.transpose(logits, (1, 0, 2, 3))\n logits = logits.reshape([C, N * H * W]).transpose([1, 0])\n\n label = np.transpose(label, (1, 0, 2, 3))\n label = label.reshape([1, N * H * W]).squeeze()\n\n if not logits.shape[0] == label.shape[0]:\n raise ValueError('length of `logit` and `label` should be equal, '\n 'but they are {} and {}.'.format(\n logits.shape[0], label.shape[0]))\n\n if num_classes == 2:\n auc = skmetrics.roc_auc_score(label, logits[:, 1])\n else:\n auc = skmetrics.roc_auc_score(label, logits, multi_class='ovr')\n\n return auc\n\n\ndef mean_iou(intersect_area, pred_area, label_area):\n \"\"\"\n Calculate iou.\n\n Args:\n intersect_area (Tensor): The intersection area of prediction and ground truth on all classes.\n pred_area (Tensor): The prediction area on all classes.\n label_area (Tensor): The ground truth area on all classes.\n\n Returns:\n np.ndarray: iou on all classes.\n float: mean iou of all classes.\n \"\"\"\n intersect_area = intersect_area.numpy()\n pred_area = pred_area.numpy()\n label_area = label_area.numpy()\n union = pred_area + label_area - intersect_area\n class_iou = []\n for i in range(len(intersect_area)):\n if union[i] == 0:\n iou = 0\n else:\n iou = intersect_area[i] / union[i]\n class_iou.append(iou)\n miou = np.mean(class_iou)\n return np.array(class_iou), miou\n\n\ndef dice(intersect_area, pred_area, label_area):\n \"\"\"\n Calculate DICE.\n\n Args:\n intersect_area (Tensor): The intersection area of prediction and ground truth on all classes.\n pred_area (Tensor): The prediction area on all classes.\n label_area (Tensor): The ground truth area on all classes.\n\n Returns:\n np.ndarray: DICE on all classes.\n float: mean DICE of all classes.\n \"\"\"\n intersect_area = intersect_area.numpy()\n pred_area = pred_area.numpy()\n label_area = label_area.numpy()\n union = pred_area + label_area\n class_dice = []\n for i in range(len(intersect_area)):\n if union[i] == 0:\n dice = 0\n else:\n dice = (2 * intersect_area[i]) / union[i]\n class_dice.append(dice)\n mdice = np.mean(class_dice)\n return np.array(class_dice), mdice\n\n\ndef accuracy(intersect_area, pred_area):\n \"\"\"\n Calculate accuracy\n\n Args:\n intersect_area (Tensor): The intersection area of prediction and ground truth on all classes..\n pred_area (Tensor): The prediction area on all classes.\n\n Returns:\n np.ndarray: accuracy on all classes.\n float: mean accuracy.\n \"\"\"\n intersect_area = intersect_area.numpy()\n pred_area = pred_area.numpy()\n class_acc = []\n for i in range(len(intersect_area)):\n if pred_area[i] == 0:\n acc = 0\n else:\n acc = intersect_area[i] / pred_area[i]\n class_acc.append(acc)\n macc = np.sum(intersect_area) / np.sum(pred_area)\n return np.array(class_acc), macc\n\n\ndef kappa(intersect_area, pred_area, label_area):\n \"\"\"\n Calculate kappa coefficient\n\n Args:\n intersect_area (Tensor): The intersection area of prediction and ground truth on all classes..\n pred_area (Tensor): The prediction area on all classes.\n label_area (Tensor): The ground truth area on all classes.\n\n Returns:\n float: kappa coefficient.\n \"\"\"\n intersect_area = intersect_area.numpy()\n pred_area = pred_area.numpy()\n label_area = label_area.numpy()\n total_area = np.sum(label_area)\n po = np.sum(intersect_area) / total_area\n pe = np.sum(pred_area * label_area) / (total_area * total_area)\n kappa = (po - pe) / (1 - pe)\n return kappa\n", "path": "paddleseg/utils/metrics.py"}]} | 3,363 | 178 |
gh_patches_debug_47466 | rasdani/github-patches | git_diff | bokeh__bokeh-8634 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stocks Example is not working properly
https://github.com/bokeh/bokeh/tree/master/examples/app/stocks
The example suppose to change the stats according to the selected points. For some reason
def selection_change(attrname, old, new):
print('lol')
t1, t2 = ticker1.value, ticker2.value
data = get_data(t1, t2)
selected = source.selected.indices
if selected:
data = data.iloc[selected, :]
update_stats(data, t1, t2)
source.on_change('selected', selection_change)
The code never prints 'lol'.
</issue>
<code>
[start of examples/app/stocks/main.py]
1 ''' Create a simple stocks correlation dashboard.
2
3 Choose stocks to compare in the drop down widgets, and make selections
4 on the plots to update the summary and histograms accordingly.
5
6 .. note::
7 Running this example requires downloading sample data. See
8 the included `README`_ for more information.
9
10 Use the ``bokeh serve`` command to run the example by executing:
11
12 bokeh serve stocks
13
14 at your command prompt. Then navigate to the URL
15
16 http://localhost:5006/stocks
17
18 .. _README: https://github.com/bokeh/bokeh/blob/master/examples/app/stocks/README.md
19
20 '''
21 try:
22 from functools import lru_cache
23 except ImportError:
24 # Python 2 does stdlib does not have lru_cache so let's just
25 # create a dummy decorator to avoid crashing
26 print ("WARNING: Cache for this example is available on Python 3 only.")
27 def lru_cache():
28 def dec(f):
29 def _(*args, **kws):
30 return f(*args, **kws)
31 return _
32 return dec
33
34 from os.path import dirname, join
35
36 import pandas as pd
37
38 from bokeh.io import curdoc
39 from bokeh.layouts import row, column
40 from bokeh.models import ColumnDataSource
41 from bokeh.models.widgets import PreText, Select
42 from bokeh.plotting import figure
43
44 DATA_DIR = join(dirname(__file__), 'daily')
45
46 DEFAULT_TICKERS = ['AAPL', 'GOOG', 'INTC', 'BRCM', 'YHOO']
47
48 def nix(val, lst):
49 return [x for x in lst if x != val]
50
51 @lru_cache()
52 def load_ticker(ticker):
53 fname = join(DATA_DIR, 'table_%s.csv' % ticker.lower())
54 data = pd.read_csv(fname, header=None, parse_dates=['date'],
55 names=['date', 'foo', 'o', 'h', 'l', 'c', 'v'])
56 data = data.set_index('date')
57 return pd.DataFrame({ticker: data.c, ticker+'_returns': data.c.diff()})
58
59 @lru_cache()
60 def get_data(t1, t2):
61 df1 = load_ticker(t1)
62 df2 = load_ticker(t2)
63 data = pd.concat([df1, df2], axis=1)
64 data = data.dropna()
65 data['t1'] = data[t1]
66 data['t2'] = data[t2]
67 data['t1_returns'] = data[t1+'_returns']
68 data['t2_returns'] = data[t2+'_returns']
69 return data
70
71 # set up widgets
72
73 stats = PreText(text='', width=500)
74 ticker1 = Select(value='AAPL', options=nix('GOOG', DEFAULT_TICKERS))
75 ticker2 = Select(value='GOOG', options=nix('AAPL', DEFAULT_TICKERS))
76
77 # set up plots
78
79 source = ColumnDataSource(data=dict(date=[], t1=[], t2=[], t1_returns=[], t2_returns=[]))
80 source_static = ColumnDataSource(data=dict(date=[], t1=[], t2=[], t1_returns=[], t2_returns=[]))
81 tools = 'pan,wheel_zoom,xbox_select,reset'
82
83 corr = figure(plot_width=350, plot_height=350,
84 tools='pan,wheel_zoom,box_select,reset')
85 corr.circle('t1_returns', 't2_returns', size=2, source=source,
86 selection_color="orange", alpha=0.6, nonselection_alpha=0.1, selection_alpha=0.4)
87
88 ts1 = figure(plot_width=900, plot_height=200, tools=tools, x_axis_type='datetime', active_drag="xbox_select")
89 ts1.line('date', 't1', source=source_static)
90 ts1.circle('date', 't1', size=1, source=source, color=None, selection_color="orange")
91
92 ts2 = figure(plot_width=900, plot_height=200, tools=tools, x_axis_type='datetime', active_drag="xbox_select")
93 ts2.x_range = ts1.x_range
94 ts2.line('date', 't2', source=source_static)
95 ts2.circle('date', 't2', size=1, source=source, color=None, selection_color="orange")
96
97 # set up callbacks
98
99 def ticker1_change(attrname, old, new):
100 ticker2.options = nix(new, DEFAULT_TICKERS)
101 update()
102
103 def ticker2_change(attrname, old, new):
104 ticker1.options = nix(new, DEFAULT_TICKERS)
105 update()
106
107 def update(selected=None):
108 t1, t2 = ticker1.value, ticker2.value
109
110 data = get_data(t1, t2)
111 source.data = source.from_df(data[['t1', 't2', 't1_returns', 't2_returns']])
112 source_static.data = source.data
113
114 update_stats(data, t1, t2)
115
116 corr.title.text = '%s returns vs. %s returns' % (t1, t2)
117 ts1.title.text, ts2.title.text = t1, t2
118
119 def update_stats(data, t1, t2):
120 stats.text = str(data[[t1, t2, t1+'_returns', t2+'_returns']].describe())
121
122 ticker1.on_change('value', ticker1_change)
123 ticker2.on_change('value', ticker2_change)
124
125 def selection_change(attrname, old, new):
126 t1, t2 = ticker1.value, ticker2.value
127 data = get_data(t1, t2)
128 selected = source.selected.indices
129 if selected:
130 data = data.iloc[selected, :]
131 update_stats(data, t1, t2)
132
133 source.on_change('selected', selection_change)
134
135 # set up layout
136 widgets = column(ticker1, ticker2, stats)
137 main_row = row(corr, widgets)
138 series = column(ts1, ts2)
139 layout = column(main_row, series)
140
141 # initialize
142 update()
143
144 curdoc().add_root(layout)
145 curdoc().title = "Stocks"
146
[end of examples/app/stocks/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/app/stocks/main.py b/examples/app/stocks/main.py
--- a/examples/app/stocks/main.py
+++ b/examples/app/stocks/main.py
@@ -130,7 +130,7 @@
data = data.iloc[selected, :]
update_stats(data, t1, t2)
-source.on_change('selected', selection_change)
+source.selected.on_change('indices', selection_change)
# set up layout
widgets = column(ticker1, ticker2, stats)
| {"golden_diff": "diff --git a/examples/app/stocks/main.py b/examples/app/stocks/main.py\n--- a/examples/app/stocks/main.py\n+++ b/examples/app/stocks/main.py\n@@ -130,7 +130,7 @@\n data = data.iloc[selected, :]\n update_stats(data, t1, t2)\n \n-source.on_change('selected', selection_change)\n+source.selected.on_change('indices', selection_change)\n \n # set up layout\n widgets = column(ticker1, ticker2, stats)\n", "issue": "Stocks Example is not working properly \nhttps://github.com/bokeh/bokeh/tree/master/examples/app/stocks\r\n\r\nThe example suppose to change the stats according to the selected points. For some reason \r\n\r\n def selection_change(attrname, old, new):\r\n print('lol')\r\n t1, t2 = ticker1.value, ticker2.value\r\n data = get_data(t1, t2)\r\n selected = source.selected.indices\r\n if selected:\r\n data = data.iloc[selected, :]\r\n update_stats(data, t1, t2)\r\n\r\n source.on_change('selected', selection_change)\r\n\r\nThe code never prints 'lol'. \n", "before_files": [{"content": "''' Create a simple stocks correlation dashboard.\n\nChoose stocks to compare in the drop down widgets, and make selections\non the plots to update the summary and histograms accordingly.\n\n.. note::\n Running this example requires downloading sample data. See\n the included `README`_ for more information.\n\nUse the ``bokeh serve`` command to run the example by executing:\n\n bokeh serve stocks\n\nat your command prompt. Then navigate to the URL\n\n http://localhost:5006/stocks\n\n.. _README: https://github.com/bokeh/bokeh/blob/master/examples/app/stocks/README.md\n\n'''\ntry:\n from functools import lru_cache\nexcept ImportError:\n # Python 2 does stdlib does not have lru_cache so let's just\n # create a dummy decorator to avoid crashing\n print (\"WARNING: Cache for this example is available on Python 3 only.\")\n def lru_cache():\n def dec(f):\n def _(*args, **kws):\n return f(*args, **kws)\n return _\n return dec\n\nfrom os.path import dirname, join\n\nimport pandas as pd\n\nfrom bokeh.io import curdoc\nfrom bokeh.layouts import row, column\nfrom bokeh.models import ColumnDataSource\nfrom bokeh.models.widgets import PreText, Select\nfrom bokeh.plotting import figure\n\nDATA_DIR = join(dirname(__file__), 'daily')\n\nDEFAULT_TICKERS = ['AAPL', 'GOOG', 'INTC', 'BRCM', 'YHOO']\n\ndef nix(val, lst):\n return [x for x in lst if x != val]\n\n@lru_cache()\ndef load_ticker(ticker):\n fname = join(DATA_DIR, 'table_%s.csv' % ticker.lower())\n data = pd.read_csv(fname, header=None, parse_dates=['date'],\n names=['date', 'foo', 'o', 'h', 'l', 'c', 'v'])\n data = data.set_index('date')\n return pd.DataFrame({ticker: data.c, ticker+'_returns': data.c.diff()})\n\n@lru_cache()\ndef get_data(t1, t2):\n df1 = load_ticker(t1)\n df2 = load_ticker(t2)\n data = pd.concat([df1, df2], axis=1)\n data = data.dropna()\n data['t1'] = data[t1]\n data['t2'] = data[t2]\n data['t1_returns'] = data[t1+'_returns']\n data['t2_returns'] = data[t2+'_returns']\n return data\n\n# set up widgets\n\nstats = PreText(text='', width=500)\nticker1 = Select(value='AAPL', options=nix('GOOG', DEFAULT_TICKERS))\nticker2 = Select(value='GOOG', options=nix('AAPL', DEFAULT_TICKERS))\n\n# set up plots\n\nsource = ColumnDataSource(data=dict(date=[], t1=[], t2=[], t1_returns=[], t2_returns=[]))\nsource_static = ColumnDataSource(data=dict(date=[], t1=[], t2=[], t1_returns=[], t2_returns=[]))\ntools = 'pan,wheel_zoom,xbox_select,reset'\n\ncorr = figure(plot_width=350, plot_height=350,\n tools='pan,wheel_zoom,box_select,reset')\ncorr.circle('t1_returns', 't2_returns', size=2, source=source,\n selection_color=\"orange\", alpha=0.6, nonselection_alpha=0.1, selection_alpha=0.4)\n\nts1 = figure(plot_width=900, plot_height=200, tools=tools, x_axis_type='datetime', active_drag=\"xbox_select\")\nts1.line('date', 't1', source=source_static)\nts1.circle('date', 't1', size=1, source=source, color=None, selection_color=\"orange\")\n\nts2 = figure(plot_width=900, plot_height=200, tools=tools, x_axis_type='datetime', active_drag=\"xbox_select\")\nts2.x_range = ts1.x_range\nts2.line('date', 't2', source=source_static)\nts2.circle('date', 't2', size=1, source=source, color=None, selection_color=\"orange\")\n\n# set up callbacks\n\ndef ticker1_change(attrname, old, new):\n ticker2.options = nix(new, DEFAULT_TICKERS)\n update()\n\ndef ticker2_change(attrname, old, new):\n ticker1.options = nix(new, DEFAULT_TICKERS)\n update()\n\ndef update(selected=None):\n t1, t2 = ticker1.value, ticker2.value\n\n data = get_data(t1, t2)\n source.data = source.from_df(data[['t1', 't2', 't1_returns', 't2_returns']])\n source_static.data = source.data\n\n update_stats(data, t1, t2)\n\n corr.title.text = '%s returns vs. %s returns' % (t1, t2)\n ts1.title.text, ts2.title.text = t1, t2\n\ndef update_stats(data, t1, t2):\n stats.text = str(data[[t1, t2, t1+'_returns', t2+'_returns']].describe())\n\nticker1.on_change('value', ticker1_change)\nticker2.on_change('value', ticker2_change)\n\ndef selection_change(attrname, old, new):\n t1, t2 = ticker1.value, ticker2.value\n data = get_data(t1, t2)\n selected = source.selected.indices\n if selected:\n data = data.iloc[selected, :]\n update_stats(data, t1, t2)\n\nsource.on_change('selected', selection_change)\n\n# set up layout\nwidgets = column(ticker1, ticker2, stats)\nmain_row = row(corr, widgets)\nseries = column(ts1, ts2)\nlayout = column(main_row, series)\n\n# initialize\nupdate()\n\ncurdoc().add_root(layout)\ncurdoc().title = \"Stocks\"\n", "path": "examples/app/stocks/main.py"}]} | 2,313 | 108 |
gh_patches_debug_22014 | rasdani/github-patches | git_diff | pytorch__text-361 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MosesTokenizer has been moved out of NLTK due to licensing issues
@jekbradbury great work here!
Due to https://github.com/nltk/nltk/issues/2000, we had to remove MosesTokenizer out of NLTK but now it's hosted on https://github.com/alvations/sacremoses
```
pip install sacremoses
```
The silver lining is that the package comes with the data needed for tokenization so there's no need to keep the `nltk_data` directory =)
----
I would propose adding `sacremoses` on top of `nltk` because NLTK has another port of a nice tokenizer (by @jonsafari) that people overlook, https://github.com/nltk/nltk/blob/develop/nltk/tokenize/toktok.py (I think it's fast too)
</issue>
<code>
[start of torchtext/data/utils.py]
1 import random
2 from contextlib import contextmanager
3 from copy import deepcopy
4
5
6 def get_tokenizer(tokenizer):
7 if callable(tokenizer):
8 return tokenizer
9 if tokenizer == "spacy":
10 try:
11 import spacy
12 spacy_en = spacy.load('en')
13 return lambda s: [tok.text for tok in spacy_en.tokenizer(s)]
14 except ImportError:
15 print("Please install SpaCy and the SpaCy English tokenizer. "
16 "See the docs at https://spacy.io for more information.")
17 raise
18 except AttributeError:
19 print("Please install SpaCy and the SpaCy English tokenizer. "
20 "See the docs at https://spacy.io for more information.")
21 raise
22 elif tokenizer == "moses":
23 try:
24 from nltk.tokenize.moses import MosesTokenizer
25 moses_tokenizer = MosesTokenizer()
26 return moses_tokenizer.tokenize
27 except ImportError:
28 print("Please install NLTK. "
29 "See the docs at http://nltk.org for more information.")
30 raise
31 except LookupError:
32 print("Please install the necessary NLTK corpora. "
33 "See the docs at http://nltk.org for more information.")
34 raise
35 elif tokenizer == 'revtok':
36 try:
37 import revtok
38 return revtok.tokenize
39 except ImportError:
40 print("Please install revtok.")
41 raise
42 elif tokenizer == 'subword':
43 try:
44 import revtok
45 return lambda x: revtok.tokenize(x, decap=True)
46 except ImportError:
47 print("Please install revtok.")
48 raise
49 raise ValueError("Requested tokenizer {}, valid choices are a "
50 "callable that takes a single string as input, "
51 "\"revtok\" for the revtok reversible tokenizer, "
52 "\"subword\" for the revtok caps-aware tokenizer, "
53 "\"spacy\" for the SpaCy English tokenizer, or "
54 "\"moses\" for the NLTK port of the Moses tokenization "
55 "script.".format(tokenizer))
56
57
58 def interleave_keys(a, b):
59 """Interleave bits from two sort keys to form a joint sort key.
60
61 Examples that are similar in both of the provided keys will have similar
62 values for the key defined by this function. Useful for tasks with two
63 text fields like machine translation or natural language inference.
64 """
65 def interleave(args):
66 return ''.join([x for t in zip(*args) for x in t])
67 return int(''.join(interleave(format(x, '016b') for x in (a, b))), base=2)
68
69
70 def get_torch_version():
71 import torch
72 v = torch.__version__
73 version_substrings = v.split('.')
74 major, minor = version_substrings[0], version_substrings[1]
75 return int(major), int(minor)
76
77
78 class RandomShuffler(object):
79 """Use random functions while keeping track of the random state to make it
80 reproducible and deterministic."""
81
82 def __init__(self, random_state=None):
83 self._random_state = random_state
84 if self._random_state is None:
85 self._random_state = random.getstate()
86
87 @contextmanager
88 def use_internal_state(self):
89 """Use a specific RNG state."""
90 old_state = random.getstate()
91 random.setstate(self._random_state)
92 yield
93 self._random_state = random.getstate()
94 random.setstate(old_state)
95
96 @property
97 def random_state(self):
98 return deepcopy(self._random_state)
99
100 @random_state.setter
101 def random_state(self, s):
102 self._random_state = s
103
104 def __call__(self, data):
105 """Shuffle and return a new list."""
106 with self.use_internal_state():
107 return random.sample(data, len(data))
108
[end of torchtext/data/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchtext/data/utils.py b/torchtext/data/utils.py
--- a/torchtext/data/utils.py
+++ b/torchtext/data/utils.py
@@ -21,16 +21,22 @@
raise
elif tokenizer == "moses":
try:
- from nltk.tokenize.moses import MosesTokenizer
+ from sacremoses import MosesTokenizer
moses_tokenizer = MosesTokenizer()
return moses_tokenizer.tokenize
except ImportError:
- print("Please install NLTK. "
- "See the docs at http://nltk.org for more information.")
+ print("Please install SacreMoses. "
+ "See the docs at https://github.com/alvations/sacremoses "
+ "for more information.")
raise
- except LookupError:
- print("Please install the necessary NLTK corpora. "
- "See the docs at http://nltk.org for more information.")
+ elif tokenizer == "toktok":
+ try:
+ from nltk.tokenize.toktok import ToktokTokenizer
+ toktok = ToktokTokenizer()
+ return toktok.tokenize
+ except ImportError:
+ print("Please install NLTK. "
+ "See the docs at https://nltk.org for more information.")
raise
elif tokenizer == 'revtok':
try:
| {"golden_diff": "diff --git a/torchtext/data/utils.py b/torchtext/data/utils.py\n--- a/torchtext/data/utils.py\n+++ b/torchtext/data/utils.py\n@@ -21,16 +21,22 @@\n raise\n elif tokenizer == \"moses\":\n try:\n- from nltk.tokenize.moses import MosesTokenizer\n+ from sacremoses import MosesTokenizer\n moses_tokenizer = MosesTokenizer()\n return moses_tokenizer.tokenize\n except ImportError:\n- print(\"Please install NLTK. \"\n- \"See the docs at http://nltk.org for more information.\")\n+ print(\"Please install SacreMoses. \"\n+ \"See the docs at https://github.com/alvations/sacremoses \"\n+ \"for more information.\")\n raise\n- except LookupError:\n- print(\"Please install the necessary NLTK corpora. \"\n- \"See the docs at http://nltk.org for more information.\")\n+ elif tokenizer == \"toktok\":\n+ try:\n+ from nltk.tokenize.toktok import ToktokTokenizer\n+ toktok = ToktokTokenizer()\n+ return toktok.tokenize\n+ except ImportError:\n+ print(\"Please install NLTK. \"\n+ \"See the docs at https://nltk.org for more information.\")\n raise\n elif tokenizer == 'revtok':\n try:\n", "issue": "MosesTokenizer has been moved out of NLTK due to licensing issues\n@jekbradbury great work here!\r\n\r\nDue to https://github.com/nltk/nltk/issues/2000, we had to remove MosesTokenizer out of NLTK but now it's hosted on https://github.com/alvations/sacremoses \r\n\r\n```\r\npip install sacremoses\r\n```\r\n\r\nThe silver lining is that the package comes with the data needed for tokenization so there's no need to keep the `nltk_data` directory =)\r\n\r\n----\r\n\r\nI would propose adding `sacremoses` on top of `nltk` because NLTK has another port of a nice tokenizer (by @jonsafari) that people overlook, https://github.com/nltk/nltk/blob/develop/nltk/tokenize/toktok.py (I think it's fast too)\n", "before_files": [{"content": "import random\nfrom contextlib import contextmanager\nfrom copy import deepcopy\n\n\ndef get_tokenizer(tokenizer):\n if callable(tokenizer):\n return tokenizer\n if tokenizer == \"spacy\":\n try:\n import spacy\n spacy_en = spacy.load('en')\n return lambda s: [tok.text for tok in spacy_en.tokenizer(s)]\n except ImportError:\n print(\"Please install SpaCy and the SpaCy English tokenizer. \"\n \"See the docs at https://spacy.io for more information.\")\n raise\n except AttributeError:\n print(\"Please install SpaCy and the SpaCy English tokenizer. \"\n \"See the docs at https://spacy.io for more information.\")\n raise\n elif tokenizer == \"moses\":\n try:\n from nltk.tokenize.moses import MosesTokenizer\n moses_tokenizer = MosesTokenizer()\n return moses_tokenizer.tokenize\n except ImportError:\n print(\"Please install NLTK. \"\n \"See the docs at http://nltk.org for more information.\")\n raise\n except LookupError:\n print(\"Please install the necessary NLTK corpora. \"\n \"See the docs at http://nltk.org for more information.\")\n raise\n elif tokenizer == 'revtok':\n try:\n import revtok\n return revtok.tokenize\n except ImportError:\n print(\"Please install revtok.\")\n raise\n elif tokenizer == 'subword':\n try:\n import revtok\n return lambda x: revtok.tokenize(x, decap=True)\n except ImportError:\n print(\"Please install revtok.\")\n raise\n raise ValueError(\"Requested tokenizer {}, valid choices are a \"\n \"callable that takes a single string as input, \"\n \"\\\"revtok\\\" for the revtok reversible tokenizer, \"\n \"\\\"subword\\\" for the revtok caps-aware tokenizer, \"\n \"\\\"spacy\\\" for the SpaCy English tokenizer, or \"\n \"\\\"moses\\\" for the NLTK port of the Moses tokenization \"\n \"script.\".format(tokenizer))\n\n\ndef interleave_keys(a, b):\n \"\"\"Interleave bits from two sort keys to form a joint sort key.\n\n Examples that are similar in both of the provided keys will have similar\n values for the key defined by this function. Useful for tasks with two\n text fields like machine translation or natural language inference.\n \"\"\"\n def interleave(args):\n return ''.join([x for t in zip(*args) for x in t])\n return int(''.join(interleave(format(x, '016b') for x in (a, b))), base=2)\n\n\ndef get_torch_version():\n import torch\n v = torch.__version__\n version_substrings = v.split('.')\n major, minor = version_substrings[0], version_substrings[1]\n return int(major), int(minor)\n\n\nclass RandomShuffler(object):\n \"\"\"Use random functions while keeping track of the random state to make it\n reproducible and deterministic.\"\"\"\n\n def __init__(self, random_state=None):\n self._random_state = random_state\n if self._random_state is None:\n self._random_state = random.getstate()\n\n @contextmanager\n def use_internal_state(self):\n \"\"\"Use a specific RNG state.\"\"\"\n old_state = random.getstate()\n random.setstate(self._random_state)\n yield\n self._random_state = random.getstate()\n random.setstate(old_state)\n\n @property\n def random_state(self):\n return deepcopy(self._random_state)\n\n @random_state.setter\n def random_state(self, s):\n self._random_state = s\n\n def __call__(self, data):\n \"\"\"Shuffle and return a new list.\"\"\"\n with self.use_internal_state():\n return random.sample(data, len(data))\n", "path": "torchtext/data/utils.py"}]} | 1,744 | 299 |
gh_patches_debug_43415 | rasdani/github-patches | git_diff | kserve__kserve-2817 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Knative KafaSource detects wrong URL to serve events
/kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
- Deployed knative-eventing and kafkasource.
- Added https://github.com/kserve/kserve/blob/master/docs/samples/kafka/addressable-resolver.yaml
- knative eventing is able to read the kafka source.
- The service name created by inferenceservice is \<isvc-name>-predictor-default. However the kafkasource sends the events to http://\<isvc-name>.\<namespace>.svc.cluster.local
**What did you expect to happen:**
- I expected the requests to be sent to http://\<isvc-name>-predictor-default.\<namespace>.svc.cluster.local
**What's the InferenceService yaml:**
```
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
name: "<inference-name>"
namespace: "\<namespace>"
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '8082'
labels:
name: "\<label>"
spec:
predictor:
minReplicas: 3
maxReplicas: 100
pytorch:
name: \<name>
storageUri: gs://<storage>
resources:
limits:
cpu: 3000m
memory: 3Gi
requests:
cpu: 2000m
memory: 3Gi
```
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
```
ingress: |-
{
"ingressGateway" : "knative-serving/knative-ingress-gateway",
"ingressService" : "istio-ingressgateway.istio-system.svc.cluster.local",
"localGateway" : "knative-serving/knative-local-gateway",
"localGatewayService" : "knative-local-gateway.istio-system.svc.cluster.local",
"ingressDomain" : "example.com",
"ingressClassName" : "kong",
"domainTemplate": "{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}",
"urlScheme": "http"
}
```
**Environment:**
Using Kong
RawDeployment
Torchserve version : 0.6.1
- Istio Version: Istio not installed
- Knative Version: knative serving not installled. knative eventing v1.8.2
- KServe Version: 0.9.0
- Kubeflow version: N/A
- Cloud Environment: GKE
- Minikube/Kind version:
- Kubernetes version: (use `kubectl version`): 1.23
- OS (e.g. from `/etc/os-release`): GKE
</issue>
<code>
[start of docs/samples/kafka/image_transformer/image_transformer.py]
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 import kserve
15 from typing import Dict, Union
16 import logging
17 import boto3
18 import cv2
19 from cloudevents.http import CloudEvent
20
21 from kserve import InferRequest, InferResponse
22 from kserve.protocol.grpc.grpc_predict_v2_pb2 import ModelInferResponse
23
24 logging.basicConfig(level=kserve.constants.KSERVE_LOGLEVEL)
25
26 session = boto3.Session()
27 client = session.client('s3', endpoint_url='http://minio-service:9000', aws_access_key_id='minio',
28 aws_secret_access_key='minio123')
29
30
31 def image_transform(image):
32 img = cv2.imread(image, cv2.IMREAD_GRAYSCALE)
33 g = cv2.resize(255 - img, (28, 28))
34 g = g.flatten() / 255.0
35 return g.tolist()
36
37
38 class ImageTransformer(kserve.Model):
39 def __init__(self, name: str, predictor_host: str):
40 super().__init__(name)
41 self.predictor_host = predictor_host
42 self._key = None
43
44 def preprocess(self, inputs: Union[Dict, CloudEvent, InferRequest],
45 headers: Dict[str, str] = None) -> Union[Dict, InferRequest]:
46 if inputs['EventName'] == 's3:ObjectCreated:Put':
47 bucket = inputs['Records'][0]['s3']['bucket']['name']
48 key = inputs['Records'][0]['s3']['object']['key']
49 self._key = key
50 client.download_file(bucket, key, '/tmp/' + key)
51 request = image_transform('/tmp/' + key)
52 return {"instances": [request]}
53 raise Exception("unknown event")
54
55 def postprocess(self, response: Union[Dict, InferResponse, ModelInferResponse], headers: Dict[str, str] = None) \
56 -> Union[Dict, ModelInferResponse]:
57 logging.info(response)
58 index = response["predictions"][0]["classes"]
59 logging.info("digit:" + str(index))
60 client.upload_file('/tmp/' + self._key, 'digit-' + str(index), self._key)
61 return response
62
[end of docs/samples/kafka/image_transformer/image_transformer.py]
[start of docs/samples/kafka/setup.py]
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13 import os
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-tornasync',
20 'mypy'
21 ]
22
23 with open(os.path.join(os.getcwd(), '../../../python/VERSION')) as version_file:
24 version = version_file.read().strip()
25
26 setup(
27 name='transformer',
28 version='0.1.0',
29 author_email='[email protected]',
30 license='../../LICENSE.txt',
31 url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',
32 description='Transformer',
33 long_description=open('README.md').read(),
34 python_requires='>=3.7',
35 packages=find_packages("transformer"),
36 install_requires=[
37 f"kserve>={version}",
38 "pandas>=0.24.2",
39 "opencv-python-headless==4.2.0.32",
40 ],
41 tests_require=tests_require,
42 extras_require={'test': tests_require}
43 )
44
[end of docs/samples/kafka/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/samples/kafka/image_transformer/image_transformer.py b/docs/samples/kafka/image_transformer/image_transformer.py
--- a/docs/samples/kafka/image_transformer/image_transformer.py
+++ b/docs/samples/kafka/image_transformer/image_transformer.py
@@ -11,13 +11,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import kserve
-from typing import Dict, Union
import logging
+from typing import Dict, Union
+
import boto3
import cv2
from cloudevents.http import CloudEvent
+import kserve
from kserve import InferRequest, InferResponse
from kserve.protocol.grpc.grpc_predict_v2_pb2 import ModelInferResponse
@@ -26,6 +27,7 @@
session = boto3.Session()
client = session.client('s3', endpoint_url='http://minio-service:9000', aws_access_key_id='minio',
aws_secret_access_key='minio123')
+digits_bucket = 'digits'
def image_transform(image):
@@ -41,8 +43,9 @@
self.predictor_host = predictor_host
self._key = None
- def preprocess(self, inputs: Union[Dict, CloudEvent, InferRequest],
- headers: Dict[str, str] = None) -> Union[Dict, InferRequest]:
+ async def preprocess(self, inputs: Union[Dict, CloudEvent, InferRequest],
+ headers: Dict[str, str] = None) -> Union[Dict, InferRequest]:
+ logging.info("Received inputs %s", inputs)
if inputs['EventName'] == 's3:ObjectCreated:Put':
bucket = inputs['Records'][0]['s3']['bucket']['name']
key = inputs['Records'][0]['s3']['object']['key']
@@ -54,8 +57,10 @@
def postprocess(self, response: Union[Dict, InferResponse, ModelInferResponse], headers: Dict[str, str] = None) \
-> Union[Dict, ModelInferResponse]:
- logging.info(response)
+ logging.info("response: %s", response)
index = response["predictions"][0]["classes"]
logging.info("digit:" + str(index))
- client.upload_file('/tmp/' + self._key, 'digit-' + str(index), self._key)
+ upload_path = f'digit-{index}/{self._key}'
+ client.upload_file('/tmp/' + self._key, digits_bucket, upload_path)
+ logging.info(f"Image {self._key} successfully uploaded to {upload_path}")
return response
diff --git a/docs/samples/kafka/setup.py b/docs/samples/kafka/setup.py
--- a/docs/samples/kafka/setup.py
+++ b/docs/samples/kafka/setup.py
@@ -10,22 +10,17 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-import os
from setuptools import setup, find_packages
tests_require = [
'pytest',
- 'pytest-tornasync',
'mypy'
]
-with open(os.path.join(os.getcwd(), '../../../python/VERSION')) as version_file:
- version = version_file.read().strip()
-
setup(
name='transformer',
- version='0.1.0',
+ version='0.2.0',
author_email='[email protected]',
license='../../LICENSE.txt',
url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',
@@ -34,9 +29,9 @@
python_requires='>=3.7',
packages=find_packages("transformer"),
install_requires=[
- f"kserve>={version}",
+ "kserve>0.10.0",
"pandas>=0.24.2",
- "opencv-python-headless==4.2.0.32",
+ "opencv-python-headless==4.7.0.72",
],
tests_require=tests_require,
extras_require={'test': tests_require}
| {"golden_diff": "diff --git a/docs/samples/kafka/image_transformer/image_transformer.py b/docs/samples/kafka/image_transformer/image_transformer.py\n--- a/docs/samples/kafka/image_transformer/image_transformer.py\n+++ b/docs/samples/kafka/image_transformer/image_transformer.py\n@@ -11,13 +11,14 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-import kserve\n-from typing import Dict, Union\n import logging\n+from typing import Dict, Union\n+\n import boto3\n import cv2\n from cloudevents.http import CloudEvent\n \n+import kserve\n from kserve import InferRequest, InferResponse\n from kserve.protocol.grpc.grpc_predict_v2_pb2 import ModelInferResponse\n \n@@ -26,6 +27,7 @@\n session = boto3.Session()\n client = session.client('s3', endpoint_url='http://minio-service:9000', aws_access_key_id='minio',\n aws_secret_access_key='minio123')\n+digits_bucket = 'digits'\n \n \n def image_transform(image):\n@@ -41,8 +43,9 @@\n self.predictor_host = predictor_host\n self._key = None\n \n- def preprocess(self, inputs: Union[Dict, CloudEvent, InferRequest],\n- headers: Dict[str, str] = None) -> Union[Dict, InferRequest]:\n+ async def preprocess(self, inputs: Union[Dict, CloudEvent, InferRequest],\n+ headers: Dict[str, str] = None) -> Union[Dict, InferRequest]:\n+ logging.info(\"Received inputs %s\", inputs)\n if inputs['EventName'] == 's3:ObjectCreated:Put':\n bucket = inputs['Records'][0]['s3']['bucket']['name']\n key = inputs['Records'][0]['s3']['object']['key']\n@@ -54,8 +57,10 @@\n \n def postprocess(self, response: Union[Dict, InferResponse, ModelInferResponse], headers: Dict[str, str] = None) \\\n -> Union[Dict, ModelInferResponse]:\n- logging.info(response)\n+ logging.info(\"response: %s\", response)\n index = response[\"predictions\"][0][\"classes\"]\n logging.info(\"digit:\" + str(index))\n- client.upload_file('/tmp/' + self._key, 'digit-' + str(index), self._key)\n+ upload_path = f'digit-{index}/{self._key}'\n+ client.upload_file('/tmp/' + self._key, digits_bucket, upload_path)\n+ logging.info(f\"Image {self._key} successfully uploaded to {upload_path}\")\n return response\ndiff --git a/docs/samples/kafka/setup.py b/docs/samples/kafka/setup.py\n--- a/docs/samples/kafka/setup.py\n+++ b/docs/samples/kafka/setup.py\n@@ -10,22 +10,17 @@\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n # See the License for the specific language governing permissions and\n # limitations under the License.\n-import os\n \n from setuptools import setup, find_packages\n \n tests_require = [\n 'pytest',\n- 'pytest-tornasync',\n 'mypy'\n ]\n \n-with open(os.path.join(os.getcwd(), '../../../python/VERSION')) as version_file:\n- version = version_file.read().strip()\n-\n setup(\n name='transformer',\n- version='0.1.0',\n+ version='0.2.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',\n@@ -34,9 +29,9 @@\n python_requires='>=3.7',\n packages=find_packages(\"transformer\"),\n install_requires=[\n- f\"kserve>={version}\",\n+ \"kserve>0.10.0\",\n \"pandas>=0.24.2\",\n- \"opencv-python-headless==4.2.0.32\",\n+ \"opencv-python-headless==4.7.0.72\",\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n", "issue": "Knative KafaSource detects wrong URL to serve events\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\n[A clear and concise description of what the bug is.]\r\n- Deployed knative-eventing and kafkasource.\r\n- Added https://github.com/kserve/kserve/blob/master/docs/samples/kafka/addressable-resolver.yaml\r\n- knative eventing is able to read the kafka source.\r\n- The service name created by inferenceservice is \\<isvc-name>-predictor-default. However the kafkasource sends the events to http://\\<isvc-name>.\\<namespace>.svc.cluster.local\r\n\r\n**What did you expect to happen:**\r\n- I expected the requests to be sent to http://\\<isvc-name>-predictor-default.\\<namespace>.svc.cluster.local\r\n\r\n**What's the InferenceService yaml:**\r\n```\r\napiVersion: \"serving.kserve.io/v1beta1\"\r\nkind: \"InferenceService\"\r\nmetadata:\r\n name: \"<inference-name>\"\r\n namespace: \"\\<namespace>\"\r\n annotations:\r\n prometheus.io/scrape: 'true'\r\n prometheus.io/port: '8082'\r\n labels:\r\n name: \"\\<label>\"\r\nspec:\r\n predictor:\r\n minReplicas: 3\r\n maxReplicas: 100\r\n pytorch:\r\n name: \\<name>\r\n storageUri: gs://<storage>\r\n resources:\r\n limits:\r\n cpu: 3000m\r\n memory: 3Gi\r\n requests:\r\n cpu: 2000m\r\n memory: 3Gi\r\n```\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n```\r\ningress: |-\r\n {\r\n \"ingressGateway\" : \"knative-serving/knative-ingress-gateway\",\r\n \"ingressService\" : \"istio-ingressgateway.istio-system.svc.cluster.local\",\r\n \"localGateway\" : \"knative-serving/knative-local-gateway\",\r\n \"localGatewayService\" : \"knative-local-gateway.istio-system.svc.cluster.local\",\r\n \"ingressDomain\" : \"example.com\",\r\n \"ingressClassName\" : \"kong\",\r\n \"domainTemplate\": \"{{ .Name }}-{{ .Namespace }}.{{ .IngressDomain }}\",\r\n \"urlScheme\": \"http\"\r\n }\r\n```\r\n\r\n**Environment:**\r\n Using Kong\r\n RawDeployment\r\nTorchserve version : 0.6.1\r\n- Istio Version: Istio not installed\r\n- Knative Version: knative serving not installled. knative eventing v1.8.2\r\n- KServe Version: 0.9.0\r\n- Kubeflow version: N/A\r\n- Cloud Environment: GKE\r\n- Minikube/Kind version:\r\n- Kubernetes version: (use `kubectl version`): 1.23\r\n- OS (e.g. from `/etc/os-release`): GKE\r\n\n", "before_files": [{"content": "#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport kserve\nfrom typing import Dict, Union\nimport logging\nimport boto3\nimport cv2\nfrom cloudevents.http import CloudEvent\n\nfrom kserve import InferRequest, InferResponse\nfrom kserve.protocol.grpc.grpc_predict_v2_pb2 import ModelInferResponse\n\nlogging.basicConfig(level=kserve.constants.KSERVE_LOGLEVEL)\n\nsession = boto3.Session()\nclient = session.client('s3', endpoint_url='http://minio-service:9000', aws_access_key_id='minio',\n aws_secret_access_key='minio123')\n\n\ndef image_transform(image):\n img = cv2.imread(image, cv2.IMREAD_GRAYSCALE)\n g = cv2.resize(255 - img, (28, 28))\n g = g.flatten() / 255.0\n return g.tolist()\n\n\nclass ImageTransformer(kserve.Model):\n def __init__(self, name: str, predictor_host: str):\n super().__init__(name)\n self.predictor_host = predictor_host\n self._key = None\n\n def preprocess(self, inputs: Union[Dict, CloudEvent, InferRequest],\n headers: Dict[str, str] = None) -> Union[Dict, InferRequest]:\n if inputs['EventName'] == 's3:ObjectCreated:Put':\n bucket = inputs['Records'][0]['s3']['bucket']['name']\n key = inputs['Records'][0]['s3']['object']['key']\n self._key = key\n client.download_file(bucket, key, '/tmp/' + key)\n request = image_transform('/tmp/' + key)\n return {\"instances\": [request]}\n raise Exception(\"unknown event\")\n\n def postprocess(self, response: Union[Dict, InferResponse, ModelInferResponse], headers: Dict[str, str] = None) \\\n -> Union[Dict, ModelInferResponse]:\n logging.info(response)\n index = response[\"predictions\"][0][\"classes\"]\n logging.info(\"digit:\" + str(index))\n client.upload_file('/tmp/' + self._key, 'digit-' + str(index), self._key)\n return response\n", "path": "docs/samples/kafka/image_transformer/image_transformer.py"}, {"content": "#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nwith open(os.path.join(os.getcwd(), '../../../python/VERSION')) as version_file:\n version = version_file.read().strip()\n\nsetup(\n name='transformer',\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',\n description='Transformer',\n long_description=open('README.md').read(),\n python_requires='>=3.7',\n packages=find_packages(\"transformer\"),\n install_requires=[\n f\"kserve>={version}\",\n \"pandas>=0.24.2\",\n \"opencv-python-headless==4.2.0.32\",\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "docs/samples/kafka/setup.py"}]} | 2,303 | 914 |
gh_patches_debug_9600 | rasdani/github-patches | git_diff | ansible__ansible-17457 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ansible fails to create retry files with [Errno 2] No such file or directory: ''
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
retry files
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
none
##### OS / ENVIRONMENT
Ubuntu 16.04
##### SUMMARY
When a playbook fails, Ansible tries to create a retry file and then fails.
##### STEPS TO REPRODUCE
```
# test.yml
---
- hosts: localhost
gather_facts: no
tasks:
- command: "false"
```
Run `ansible-playbook -i localhost, -c local test.yml`
##### EXPECTED RESULTS
Playbook fails, ansible doesn't complain about failing to create `test.retry`.
##### ACTUAL RESULTS
```
PLAY [localhost] ***************************************************************
TASK [command] *****************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["false"], "delta": "0:00:00.001666", "end": "2016-09-08 11:42:55.135782", "failed": true, "rc": 1, "start": "2016-09-08 11:42:55.134116", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'test.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
</issue>
<code>
[start of lib/ansible/executor/playbook_executor.py]
1 # (c) 2012-2014, Michael DeHaan <[email protected]>
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17
18 # Make coding more python3-ish
19 from __future__ import (absolute_import, division, print_function)
20 __metaclass__ = type
21
22 import os
23
24 from ansible import constants as C
25 from ansible.executor.task_queue_manager import TaskQueueManager
26 from ansible.module_utils._text import to_native, to_text
27 from ansible.playbook import Playbook
28 from ansible.template import Templar
29 from ansible.utils.helpers import pct_to_int
30 from ansible.utils.path import makedirs_safe
31
32 try:
33 from __main__ import display
34 except ImportError:
35 from ansible.utils.display import Display
36 display = Display()
37
38
39 class PlaybookExecutor:
40
41 '''
42 This is the primary class for executing playbooks, and thus the
43 basis for bin/ansible-playbook operation.
44 '''
45
46 def __init__(self, playbooks, inventory, variable_manager, loader, options, passwords):
47 self._playbooks = playbooks
48 self._inventory = inventory
49 self._variable_manager = variable_manager
50 self._loader = loader
51 self._options = options
52 self.passwords = passwords
53 self._unreachable_hosts = dict()
54
55 if options.listhosts or options.listtasks or options.listtags or options.syntax:
56 self._tqm = None
57 else:
58 self._tqm = TaskQueueManager(inventory=inventory, variable_manager=variable_manager, loader=loader, options=options, passwords=self.passwords)
59
60 def run(self):
61
62 '''
63 Run the given playbook, based on the settings in the play which
64 may limit the runs to serialized groups, etc.
65 '''
66
67 result = 0
68 entrylist = []
69 entry = {}
70 try:
71 for playbook_path in self._playbooks:
72 pb = Playbook.load(playbook_path, variable_manager=self._variable_manager, loader=self._loader)
73 self._inventory.set_playbook_basedir(os.path.realpath(os.path.dirname(playbook_path)))
74
75 if self._tqm is None: # we are doing a listing
76 entry = {'playbook': playbook_path}
77 entry['plays'] = []
78 else:
79 # make sure the tqm has callbacks loaded
80 self._tqm.load_callbacks()
81 self._tqm.send_callback('v2_playbook_on_start', pb)
82
83 i = 1
84 plays = pb.get_plays()
85 display.vv(u'%d plays in %s' % (len(plays), to_text(playbook_path)))
86
87 for play in plays:
88 if play._included_path is not None:
89 self._loader.set_basedir(play._included_path)
90 else:
91 self._loader.set_basedir(pb._basedir)
92
93 # clear any filters which may have been applied to the inventory
94 self._inventory.remove_restriction()
95
96 if play.vars_prompt:
97 for var in play.vars_prompt:
98 vname = var['name']
99 prompt = var.get("prompt", vname)
100 default = var.get("default", None)
101 private = var.get("private", True)
102 confirm = var.get("confirm", False)
103 encrypt = var.get("encrypt", None)
104 salt_size = var.get("salt_size", None)
105 salt = var.get("salt", None)
106
107 if vname not in self._variable_manager.extra_vars:
108 if self._tqm:
109 self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt, default)
110 play.vars[vname] = display.do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default)
111 else: # we are either in --list-<option> or syntax check
112 play.vars[vname] = default
113
114 # Create a temporary copy of the play here, so we can run post_validate
115 # on it without the templating changes affecting the original object.
116 all_vars = self._variable_manager.get_vars(loader=self._loader, play=play)
117 templar = Templar(loader=self._loader, variables=all_vars)
118 new_play = play.copy()
119 new_play.post_validate(templar)
120
121 if self._options.syntax:
122 continue
123
124 if self._tqm is None:
125 # we are just doing a listing
126 entry['plays'].append(new_play)
127
128 else:
129 self._tqm._unreachable_hosts.update(self._unreachable_hosts)
130
131 previously_failed = len(self._tqm._failed_hosts)
132 previously_unreachable = len(self._tqm._unreachable_hosts)
133
134 break_play = False
135 # we are actually running plays
136 for batch in self._get_serialized_batches(new_play):
137 if len(batch) == 0:
138 self._tqm.send_callback('v2_playbook_on_play_start', new_play)
139 self._tqm.send_callback('v2_playbook_on_no_hosts_matched')
140 break
141
142 # restrict the inventory to the hosts in the serialized batch
143 self._inventory.restrict_to_hosts(batch)
144 # and run it...
145 result = self._tqm.run(play=play)
146
147 # break the play if the result equals the special return code
148 if result & self._tqm.RUN_FAILED_BREAK_PLAY != 0:
149 result = self._tqm.RUN_FAILED_HOSTS
150 break_play = True
151
152 # check the number of failures here, to see if they're above the maximum
153 # failure percentage allowed, or if any errors are fatal. If either of those
154 # conditions are met, we break out, otherwise we only break out if the entire
155 # batch failed
156 failed_hosts_count = len(self._tqm._failed_hosts) + len(self._tqm._unreachable_hosts) - \
157 (previously_failed + previously_unreachable)
158
159 if len(batch) == failed_hosts_count:
160 break_play = True
161 break
162
163 # update the previous counts so they don't accumulate incorrectly
164 # over multiple serial batches
165 previously_failed += len(self._tqm._failed_hosts) - previously_failed
166 previously_unreachable += len(self._tqm._unreachable_hosts) - previously_unreachable
167
168 # save the unreachable hosts from this batch
169 self._unreachable_hosts.update(self._tqm._unreachable_hosts)
170
171 if break_play:
172 break
173
174 i = i + 1 # per play
175
176 if entry:
177 entrylist.append(entry) # per playbook
178
179 # send the stats callback for this playbook
180 if self._tqm is not None:
181 if C.RETRY_FILES_ENABLED:
182 retries = set(self._tqm._failed_hosts.keys())
183 retries.update(self._tqm._unreachable_hosts.keys())
184 retries = sorted(retries)
185 if len(retries) > 0:
186 if C.RETRY_FILES_SAVE_PATH:
187 basedir = C.shell_expand(C.RETRY_FILES_SAVE_PATH)
188 elif playbook_path:
189 basedir = os.path.dirname(playbook_path)
190 else:
191 basedir = '~/'
192
193 (retry_name, _) = os.path.splitext(os.path.basename(playbook_path))
194 filename = os.path.join(basedir, "%s.retry" % retry_name)
195 if self._generate_retry_inventory(filename, retries):
196 display.display("\tto retry, use: --limit @%s\n" % filename)
197
198 self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats)
199
200 # if the last result wasn't zero, break out of the playbook file name loop
201 if result != 0:
202 break
203
204 if entrylist:
205 return entrylist
206
207 finally:
208 if self._tqm is not None:
209 self._tqm.cleanup()
210 if self._loader:
211 self._loader.cleanup_all_tmp_files()
212
213 if self._options.syntax:
214 display.display("No issues encountered")
215 return result
216
217 return result
218
219 def _get_serialized_batches(self, play):
220 '''
221 Returns a list of hosts, subdivided into batches based on
222 the serial size specified in the play.
223 '''
224
225 # make sure we have a unique list of hosts
226 all_hosts = self._inventory.get_hosts(play.hosts)
227 all_hosts_len = len(all_hosts)
228
229 # the serial value can be listed as a scalar or a list of
230 # scalars, so we make sure it's a list here
231 serial_batch_list = play.serial
232 if len(serial_batch_list) == 0:
233 serial_batch_list = [-1]
234
235 cur_item = 0
236 serialized_batches = []
237
238 while len(all_hosts) > 0:
239 # get the serial value from current item in the list
240 serial = pct_to_int(serial_batch_list[cur_item], all_hosts_len)
241
242 # if the serial count was not specified or is invalid, default to
243 # a list of all hosts, otherwise grab a chunk of the hosts equal
244 # to the current serial item size
245 if serial <= 0:
246 serialized_batches.append(all_hosts)
247 break
248 else:
249 play_hosts = []
250 for x in range(serial):
251 if len(all_hosts) > 0:
252 play_hosts.append(all_hosts.pop(0))
253
254 serialized_batches.append(play_hosts)
255
256 # increment the current batch list item number, and if we've hit
257 # the end keep using the last element until we've consumed all of
258 # the hosts in the inventory
259 cur_item += 1
260 if cur_item > len(serial_batch_list) - 1:
261 cur_item = len(serial_batch_list) - 1
262
263 return serialized_batches
264
265 def _generate_retry_inventory(self, retry_path, replay_hosts):
266 '''
267 Called when a playbook run fails. It generates an inventory which allows
268 re-running on ONLY the failed hosts. This may duplicate some variable
269 information in group_vars/host_vars but that is ok, and expected.
270 '''
271 try:
272 makedirs_safe(os.path.dirname(retry_path))
273 with open(retry_path, 'w') as fd:
274 for x in replay_hosts:
275 fd.write("%s\n" % x)
276 except Exception as e:
277 display.warning("Could not create retry file '%s'.\n\t%s" % (retry_path, to_native(e)))
278 return False
279
280 return True
281
[end of lib/ansible/executor/playbook_executor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/executor/playbook_executor.py b/lib/ansible/executor/playbook_executor.py
--- a/lib/ansible/executor/playbook_executor.py
+++ b/lib/ansible/executor/playbook_executor.py
@@ -186,7 +186,7 @@
if C.RETRY_FILES_SAVE_PATH:
basedir = C.shell_expand(C.RETRY_FILES_SAVE_PATH)
elif playbook_path:
- basedir = os.path.dirname(playbook_path)
+ basedir = os.path.dirname(os.path.abspath(playbook_path))
else:
basedir = '~/'
| {"golden_diff": "diff --git a/lib/ansible/executor/playbook_executor.py b/lib/ansible/executor/playbook_executor.py\n--- a/lib/ansible/executor/playbook_executor.py\n+++ b/lib/ansible/executor/playbook_executor.py\n@@ -186,7 +186,7 @@\n if C.RETRY_FILES_SAVE_PATH:\n basedir = C.shell_expand(C.RETRY_FILES_SAVE_PATH)\n elif playbook_path:\n- basedir = os.path.dirname(playbook_path)\n+ basedir = os.path.dirname(os.path.abspath(playbook_path))\n else:\n basedir = '~/'\n", "issue": "Ansible fails to create retry files with [Errno 2] No such file or directory: ''\n##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\n\nretry files\n##### ANSIBLE VERSION\n\n```\nansible 2.1.1.0\n config file = \n configured module search path = Default w/o overrides\n```\n##### CONFIGURATION\n\nnone\n##### OS / ENVIRONMENT\n\nUbuntu 16.04\n##### SUMMARY\n\nWhen a playbook fails, Ansible tries to create a retry file and then fails.\n##### STEPS TO REPRODUCE\n\n```\n# test.yml\n\n---\n- hosts: localhost\n gather_facts: no\n tasks:\n - command: \"false\"\n```\n\nRun `ansible-playbook -i localhost, -c local test.yml`\n##### EXPECTED RESULTS\n\nPlaybook fails, ansible doesn't complain about failing to create `test.retry`.\n##### ACTUAL RESULTS\n\n```\nPLAY [localhost] ***************************************************************\n\nTASK [command] *****************************************************************\nfatal: [localhost]: FAILED! => {\"changed\": true, \"cmd\": [\"false\"], \"delta\": \"0:00:00.001666\", \"end\": \"2016-09-08 11:42:55.135782\", \"failed\": true, \"rc\": 1, \"start\": \"2016-09-08 11:42:55.134116\", \"stderr\": \"\", \"stdout\": \"\", \"stdout_lines\": [], \"warnings\": []}\n\nNO MORE HOSTS LEFT *************************************************************\n [WARNING]: Could not create retry file 'test.retry'. [Errno 2] No such file or directory: ''\n\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=0 changed=0 unreachable=0 failed=1 \n```\n\n", "before_files": [{"content": "# (c) 2012-2014, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport os\n\nfrom ansible import constants as C\nfrom ansible.executor.task_queue_manager import TaskQueueManager\nfrom ansible.module_utils._text import to_native, to_text\nfrom ansible.playbook import Playbook\nfrom ansible.template import Templar\nfrom ansible.utils.helpers import pct_to_int\nfrom ansible.utils.path import makedirs_safe\n\ntry:\n from __main__ import display\nexcept ImportError:\n from ansible.utils.display import Display\n display = Display()\n\n\nclass PlaybookExecutor:\n\n '''\n This is the primary class for executing playbooks, and thus the\n basis for bin/ansible-playbook operation.\n '''\n\n def __init__(self, playbooks, inventory, variable_manager, loader, options, passwords):\n self._playbooks = playbooks\n self._inventory = inventory\n self._variable_manager = variable_manager\n self._loader = loader\n self._options = options\n self.passwords = passwords\n self._unreachable_hosts = dict()\n\n if options.listhosts or options.listtasks or options.listtags or options.syntax:\n self._tqm = None\n else:\n self._tqm = TaskQueueManager(inventory=inventory, variable_manager=variable_manager, loader=loader, options=options, passwords=self.passwords)\n\n def run(self):\n\n '''\n Run the given playbook, based on the settings in the play which\n may limit the runs to serialized groups, etc.\n '''\n\n result = 0\n entrylist = []\n entry = {}\n try:\n for playbook_path in self._playbooks:\n pb = Playbook.load(playbook_path, variable_manager=self._variable_manager, loader=self._loader)\n self._inventory.set_playbook_basedir(os.path.realpath(os.path.dirname(playbook_path)))\n\n if self._tqm is None: # we are doing a listing\n entry = {'playbook': playbook_path}\n entry['plays'] = []\n else:\n # make sure the tqm has callbacks loaded\n self._tqm.load_callbacks()\n self._tqm.send_callback('v2_playbook_on_start', pb)\n\n i = 1\n plays = pb.get_plays()\n display.vv(u'%d plays in %s' % (len(plays), to_text(playbook_path)))\n\n for play in plays:\n if play._included_path is not None:\n self._loader.set_basedir(play._included_path)\n else:\n self._loader.set_basedir(pb._basedir)\n\n # clear any filters which may have been applied to the inventory\n self._inventory.remove_restriction()\n\n if play.vars_prompt:\n for var in play.vars_prompt:\n vname = var['name']\n prompt = var.get(\"prompt\", vname)\n default = var.get(\"default\", None)\n private = var.get(\"private\", True)\n confirm = var.get(\"confirm\", False)\n encrypt = var.get(\"encrypt\", None)\n salt_size = var.get(\"salt_size\", None)\n salt = var.get(\"salt\", None)\n\n if vname not in self._variable_manager.extra_vars:\n if self._tqm:\n self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt, default)\n play.vars[vname] = display.do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default)\n else: # we are either in --list-<option> or syntax check\n play.vars[vname] = default\n\n # Create a temporary copy of the play here, so we can run post_validate\n # on it without the templating changes affecting the original object.\n all_vars = self._variable_manager.get_vars(loader=self._loader, play=play)\n templar = Templar(loader=self._loader, variables=all_vars)\n new_play = play.copy()\n new_play.post_validate(templar)\n\n if self._options.syntax:\n continue\n\n if self._tqm is None:\n # we are just doing a listing\n entry['plays'].append(new_play)\n\n else:\n self._tqm._unreachable_hosts.update(self._unreachable_hosts)\n\n previously_failed = len(self._tqm._failed_hosts)\n previously_unreachable = len(self._tqm._unreachable_hosts)\n\n break_play = False\n # we are actually running plays\n for batch in self._get_serialized_batches(new_play):\n if len(batch) == 0:\n self._tqm.send_callback('v2_playbook_on_play_start', new_play)\n self._tqm.send_callback('v2_playbook_on_no_hosts_matched')\n break\n\n # restrict the inventory to the hosts in the serialized batch\n self._inventory.restrict_to_hosts(batch)\n # and run it...\n result = self._tqm.run(play=play)\n\n # break the play if the result equals the special return code\n if result & self._tqm.RUN_FAILED_BREAK_PLAY != 0:\n result = self._tqm.RUN_FAILED_HOSTS\n break_play = True\n\n # check the number of failures here, to see if they're above the maximum\n # failure percentage allowed, or if any errors are fatal. If either of those\n # conditions are met, we break out, otherwise we only break out if the entire\n # batch failed\n failed_hosts_count = len(self._tqm._failed_hosts) + len(self._tqm._unreachable_hosts) - \\\n (previously_failed + previously_unreachable)\n\n if len(batch) == failed_hosts_count:\n break_play = True\n break\n\n # update the previous counts so they don't accumulate incorrectly\n # over multiple serial batches\n previously_failed += len(self._tqm._failed_hosts) - previously_failed\n previously_unreachable += len(self._tqm._unreachable_hosts) - previously_unreachable\n\n # save the unreachable hosts from this batch\n self._unreachable_hosts.update(self._tqm._unreachable_hosts)\n\n if break_play:\n break\n\n i = i + 1 # per play\n\n if entry:\n entrylist.append(entry) # per playbook\n\n # send the stats callback for this playbook\n if self._tqm is not None:\n if C.RETRY_FILES_ENABLED:\n retries = set(self._tqm._failed_hosts.keys())\n retries.update(self._tqm._unreachable_hosts.keys())\n retries = sorted(retries)\n if len(retries) > 0:\n if C.RETRY_FILES_SAVE_PATH:\n basedir = C.shell_expand(C.RETRY_FILES_SAVE_PATH)\n elif playbook_path:\n basedir = os.path.dirname(playbook_path)\n else:\n basedir = '~/'\n\n (retry_name, _) = os.path.splitext(os.path.basename(playbook_path))\n filename = os.path.join(basedir, \"%s.retry\" % retry_name)\n if self._generate_retry_inventory(filename, retries):\n display.display(\"\\tto retry, use: --limit @%s\\n\" % filename)\n\n self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats)\n\n # if the last result wasn't zero, break out of the playbook file name loop\n if result != 0:\n break\n\n if entrylist:\n return entrylist\n\n finally:\n if self._tqm is not None:\n self._tqm.cleanup()\n if self._loader:\n self._loader.cleanup_all_tmp_files()\n\n if self._options.syntax:\n display.display(\"No issues encountered\")\n return result\n\n return result\n\n def _get_serialized_batches(self, play):\n '''\n Returns a list of hosts, subdivided into batches based on\n the serial size specified in the play.\n '''\n\n # make sure we have a unique list of hosts\n all_hosts = self._inventory.get_hosts(play.hosts)\n all_hosts_len = len(all_hosts)\n\n # the serial value can be listed as a scalar or a list of\n # scalars, so we make sure it's a list here\n serial_batch_list = play.serial\n if len(serial_batch_list) == 0:\n serial_batch_list = [-1]\n\n cur_item = 0\n serialized_batches = []\n\n while len(all_hosts) > 0:\n # get the serial value from current item in the list\n serial = pct_to_int(serial_batch_list[cur_item], all_hosts_len)\n\n # if the serial count was not specified or is invalid, default to\n # a list of all hosts, otherwise grab a chunk of the hosts equal\n # to the current serial item size\n if serial <= 0:\n serialized_batches.append(all_hosts)\n break\n else:\n play_hosts = []\n for x in range(serial):\n if len(all_hosts) > 0:\n play_hosts.append(all_hosts.pop(0))\n\n serialized_batches.append(play_hosts)\n\n # increment the current batch list item number, and if we've hit\n # the end keep using the last element until we've consumed all of\n # the hosts in the inventory\n cur_item += 1\n if cur_item > len(serial_batch_list) - 1:\n cur_item = len(serial_batch_list) - 1\n\n return serialized_batches\n\n def _generate_retry_inventory(self, retry_path, replay_hosts):\n '''\n Called when a playbook run fails. It generates an inventory which allows\n re-running on ONLY the failed hosts. This may duplicate some variable\n information in group_vars/host_vars but that is ok, and expected.\n '''\n try:\n makedirs_safe(os.path.dirname(retry_path))\n with open(retry_path, 'w') as fd:\n for x in replay_hosts:\n fd.write(\"%s\\n\" % x)\n except Exception as e:\n display.warning(\"Could not create retry file '%s'.\\n\\t%s\" % (retry_path, to_native(e)))\n return False\n\n return True\n", "path": "lib/ansible/executor/playbook_executor.py"}]} | 4,074 | 126 |
gh_patches_debug_4660 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2935 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
False positive for CKV_AZURE_43 when using the "random" provider resources
**Describe the issue**
Check ID: CKV_AZURE_43
When using any of the random_* resources from the [random provider](https://registry.terraform.io/providers/hashicorp/random/latest/docs) check CKV_AZURE_43 fails.
StorageAccountName.py probably needs the VARIABLE_REFS list expanded to include the random_* resources.
**Examples**
```
resource "random_string" "random" {
length = 4
number = true
lower = false
special = false
upper = false
}
resource "azurerm_storage_account" "vmstorageaccount" {
name = "storage${random_string.random}"
....
}
```
**Version:**
- Checkov Version 2.0.113
</issue>
<code>
[start of checkov/terraform/checks/resource/azure/StorageAccountName.py]
1 import re
2 from typing import List, Dict, Any
3
4 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
5 from checkov.common.models.enums import CheckResult, CheckCategories
6
7 STO_NAME_REGEX = re.compile(r"^[a-z0-9]{3,24}$")
8 VARIABLE_REFS = ("local.", "module.", "var.")
9
10
11 class StorageAccountName(BaseResourceCheck):
12 def __init__(self) -> None:
13 name = "Ensure Storage Accounts adhere to the naming rules"
14 id = "CKV_AZURE_43"
15 supported_resources = ["azurerm_storage_account"]
16 categories = [CheckCategories.CONVENTION]
17 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
18
19 def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:
20 """
21 The Storage Account naming reference:
22 https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts
23 :param conf: azurerm_storage_account configuration
24 :return: <CheckResult>
25 """
26 name = conf.get("name")
27 if name:
28 name = str(name[0])
29 if any(x in name for x in VARIABLE_REFS):
30 # in the case we couldn't evaluate the name, just ignore
31 return CheckResult.UNKNOWN
32 if re.findall(STO_NAME_REGEX, str(conf["name"][0])):
33 return CheckResult.PASSED
34
35 return CheckResult.FAILED
36
37 def get_evaluated_keys(self) -> List[str]:
38 return ["name"]
39
40
41 check = StorageAccountName()
42
[end of checkov/terraform/checks/resource/azure/StorageAccountName.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/azure/StorageAccountName.py b/checkov/terraform/checks/resource/azure/StorageAccountName.py
--- a/checkov/terraform/checks/resource/azure/StorageAccountName.py
+++ b/checkov/terraform/checks/resource/azure/StorageAccountName.py
@@ -5,7 +5,7 @@
from checkov.common.models.enums import CheckResult, CheckCategories
STO_NAME_REGEX = re.compile(r"^[a-z0-9]{3,24}$")
-VARIABLE_REFS = ("local.", "module.", "var.")
+VARIABLE_REFS = ("local.", "module.", "var.", "random_string.", "random_id.", "random_integer.", "random_pet.")
class StorageAccountName(BaseResourceCheck):
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/StorageAccountName.py b/checkov/terraform/checks/resource/azure/StorageAccountName.py\n--- a/checkov/terraform/checks/resource/azure/StorageAccountName.py\n+++ b/checkov/terraform/checks/resource/azure/StorageAccountName.py\n@@ -5,7 +5,7 @@\n from checkov.common.models.enums import CheckResult, CheckCategories\n \n STO_NAME_REGEX = re.compile(r\"^[a-z0-9]{3,24}$\")\n-VARIABLE_REFS = (\"local.\", \"module.\", \"var.\")\n+VARIABLE_REFS = (\"local.\", \"module.\", \"var.\", \"random_string.\", \"random_id.\", \"random_integer.\", \"random_pet.\")\n \n \n class StorageAccountName(BaseResourceCheck):\n", "issue": "False positive for CKV_AZURE_43 when using the \"random\" provider resources\n**Describe the issue**\r\nCheck ID: CKV_AZURE_43\r\nWhen using any of the random_* resources from the [random provider](https://registry.terraform.io/providers/hashicorp/random/latest/docs) check CKV_AZURE_43 fails.\r\n\r\nStorageAccountName.py probably needs the VARIABLE_REFS list expanded to include the random_* resources.\r\n\r\n**Examples**\r\n```\r\nresource \"random_string\" \"random\" {\r\n length = 4\r\n number = true\r\n lower = false\r\n special = false\r\n upper = false\r\n}\r\n\r\nresource \"azurerm_storage_account\" \"vmstorageaccount\" {\r\n name = \"storage${random_string.random}\"\r\n ....\r\n}\r\n```\r\n\r\n**Version:**\r\n - Checkov Version 2.0.113\n", "before_files": [{"content": "import re\nfrom typing import List, Dict, Any\n\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\n\nSTO_NAME_REGEX = re.compile(r\"^[a-z0-9]{3,24}$\")\nVARIABLE_REFS = (\"local.\", \"module.\", \"var.\")\n\n\nclass StorageAccountName(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure Storage Accounts adhere to the naming rules\"\n id = \"CKV_AZURE_43\"\n supported_resources = [\"azurerm_storage_account\"]\n categories = [CheckCategories.CONVENTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:\n \"\"\"\n The Storage Account naming reference:\n https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts\n :param conf: azurerm_storage_account configuration\n :return: <CheckResult>\n \"\"\"\n name = conf.get(\"name\")\n if name:\n name = str(name[0])\n if any(x in name for x in VARIABLE_REFS):\n # in the case we couldn't evaluate the name, just ignore\n return CheckResult.UNKNOWN\n if re.findall(STO_NAME_REGEX, str(conf[\"name\"][0])):\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n def get_evaluated_keys(self) -> List[str]:\n return [\"name\"]\n\n\ncheck = StorageAccountName()\n", "path": "checkov/terraform/checks/resource/azure/StorageAccountName.py"}]} | 1,169 | 167 |
gh_patches_debug_14368 | rasdani/github-patches | git_diff | scrapy__scrapy-1131 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unhandled error in Deferred (RobotsTxtMiddleware)
**Dev story**:
Let's say spider downloads all .zip files from http://habrahabr.ru/post/212029/ page
Url with .zip files looks like this: http://layer6.jenkins.tox.im/job/qt_gui_win32/lastSuccessfulBuild/artifact/qt/build/release/TOX-Qt-GUI.zip
It's a polite spider, so settings file contains:
`ROBOTSTXT_OBEY = True`
Middleware parses habrahabr.ru robots.txt file as well as 'external' robots.txt file from layer6.jenkins.tox.im. It's expected behaviour.
But if request will be returned with error then the output would be:
```
2015-04-02 17:06:16+0300 [habrahabr] DEBUG: Gave up retrying <GET http://layer6.jenkins.tox.im/robots.txt> (failed 1 times): DNS lookup failed: address 'layer6.jenkins.tox.im' not found: [Errno 8] nodename nor servname provided, or not known.
2015-04-02 17:06:16+0300 [-] ERROR: Unhandled error in Deferred:
2015-04-02 17:06:16+0300 [-] Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.error.DNSLookupError: DNS lookup failed: address 'layer6.jenkins.tox.im' not found: [Errno 8] nodename nor servname provided, or not known.
```
</issue>
<code>
[start of scrapy/contrib/downloadermiddleware/robotstxt.py]
1 """
2 This is a middleware to respect robots.txt policies. To activate it you must
3 enable this middleware and enable the ROBOTSTXT_OBEY setting.
4
5 """
6
7 from six.moves.urllib import robotparser
8
9 from scrapy import signals, log
10 from scrapy.exceptions import NotConfigured, IgnoreRequest
11 from scrapy.http import Request
12 from scrapy.utils.httpobj import urlparse_cached
13
14
15 class RobotsTxtMiddleware(object):
16 DOWNLOAD_PRIORITY = 1000
17
18 def __init__(self, crawler):
19 if not crawler.settings.getbool('ROBOTSTXT_OBEY'):
20 raise NotConfigured
21
22 self.crawler = crawler
23 self._useragent = crawler.settings.get('USER_AGENT')
24 self._parsers = {}
25
26 @classmethod
27 def from_crawler(cls, crawler):
28 return cls(crawler)
29
30 def process_request(self, request, spider):
31 if request.meta.get('dont_obey_robotstxt'):
32 return
33 rp = self.robot_parser(request, spider)
34 if rp and not rp.can_fetch(self._useragent, request.url):
35 log.msg(format="Forbidden by robots.txt: %(request)s",
36 level=log.DEBUG, request=request)
37 raise IgnoreRequest
38
39 def robot_parser(self, request, spider):
40 url = urlparse_cached(request)
41 netloc = url.netloc
42 if netloc not in self._parsers:
43 self._parsers[netloc] = None
44 robotsurl = "%s://%s/robots.txt" % (url.scheme, url.netloc)
45 robotsreq = Request(
46 robotsurl,
47 priority=self.DOWNLOAD_PRIORITY,
48 meta={'dont_obey_robotstxt': True}
49 )
50 dfd = self.crawler.engine.download(robotsreq, spider)
51 dfd.addCallback(self._parse_robots)
52 return self._parsers[netloc]
53
54 def _parse_robots(self, response):
55 rp = robotparser.RobotFileParser(response.url)
56 rp.parse(response.body.splitlines())
57 self._parsers[urlparse_cached(response).netloc] = rp
58
[end of scrapy/contrib/downloadermiddleware/robotstxt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/contrib/downloadermiddleware/robotstxt.py b/scrapy/contrib/downloadermiddleware/robotstxt.py
--- a/scrapy/contrib/downloadermiddleware/robotstxt.py
+++ b/scrapy/contrib/downloadermiddleware/robotstxt.py
@@ -49,8 +49,14 @@
)
dfd = self.crawler.engine.download(robotsreq, spider)
dfd.addCallback(self._parse_robots)
+ dfd.addErrback(self._logerror, robotsreq, spider)
return self._parsers[netloc]
+ def _logerror(self, failure, request, spider):
+ if failure.type is not IgnoreRequest:
+ log.msg(format="Error downloading %%(request)s: %s" % failure.value,
+ level=log.ERROR, request=request, spider=spider)
+
def _parse_robots(self, response):
rp = robotparser.RobotFileParser(response.url)
rp.parse(response.body.splitlines())
| {"golden_diff": "diff --git a/scrapy/contrib/downloadermiddleware/robotstxt.py b/scrapy/contrib/downloadermiddleware/robotstxt.py\n--- a/scrapy/contrib/downloadermiddleware/robotstxt.py\n+++ b/scrapy/contrib/downloadermiddleware/robotstxt.py\n@@ -49,8 +49,14 @@\n )\n dfd = self.crawler.engine.download(robotsreq, spider)\n dfd.addCallback(self._parse_robots)\n+ dfd.addErrback(self._logerror, robotsreq, spider)\n return self._parsers[netloc]\n \n+ def _logerror(self, failure, request, spider):\n+ if failure.type is not IgnoreRequest:\n+ log.msg(format=\"Error downloading %%(request)s: %s\" % failure.value,\n+ level=log.ERROR, request=request, spider=spider)\n+\n def _parse_robots(self, response):\n rp = robotparser.RobotFileParser(response.url)\n rp.parse(response.body.splitlines())\n", "issue": "Unhandled error in Deferred (RobotsTxtMiddleware)\n**Dev story**:\nLet's say spider downloads all .zip files from http://habrahabr.ru/post/212029/ page\nUrl with .zip files looks like this: http://layer6.jenkins.tox.im/job/qt_gui_win32/lastSuccessfulBuild/artifact/qt/build/release/TOX-Qt-GUI.zip\n\nIt's a polite spider, so settings file contains:\n`ROBOTSTXT_OBEY = True`\n\nMiddleware parses habrahabr.ru robots.txt file as well as 'external' robots.txt file from layer6.jenkins.tox.im. It's expected behaviour. \nBut if request will be returned with error then the output would be:\n\n```\n2015-04-02 17:06:16+0300 [habrahabr] DEBUG: Gave up retrying <GET http://layer6.jenkins.tox.im/robots.txt> (failed 1 times): DNS lookup failed: address 'layer6.jenkins.tox.im' not found: [Errno 8] nodename nor servname provided, or not known.\n\n2015-04-02 17:06:16+0300 [-] ERROR: Unhandled error in Deferred:\n2015-04-02 17:06:16+0300 [-] Unhandled Error\n Traceback (most recent call last):\n Failure: twisted.internet.error.DNSLookupError: DNS lookup failed: address 'layer6.jenkins.tox.im' not found: [Errno 8] nodename nor servname provided, or not known.\n```\n\n", "before_files": [{"content": "\"\"\"\nThis is a middleware to respect robots.txt policies. To activate it you must\nenable this middleware and enable the ROBOTSTXT_OBEY setting.\n\n\"\"\"\n\nfrom six.moves.urllib import robotparser\n\nfrom scrapy import signals, log\nfrom scrapy.exceptions import NotConfigured, IgnoreRequest\nfrom scrapy.http import Request\nfrom scrapy.utils.httpobj import urlparse_cached\n\n\nclass RobotsTxtMiddleware(object):\n DOWNLOAD_PRIORITY = 1000\n\n def __init__(self, crawler):\n if not crawler.settings.getbool('ROBOTSTXT_OBEY'):\n raise NotConfigured\n\n self.crawler = crawler\n self._useragent = crawler.settings.get('USER_AGENT')\n self._parsers = {}\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(crawler)\n\n def process_request(self, request, spider):\n if request.meta.get('dont_obey_robotstxt'):\n return\n rp = self.robot_parser(request, spider)\n if rp and not rp.can_fetch(self._useragent, request.url):\n log.msg(format=\"Forbidden by robots.txt: %(request)s\",\n level=log.DEBUG, request=request)\n raise IgnoreRequest\n\n def robot_parser(self, request, spider):\n url = urlparse_cached(request)\n netloc = url.netloc\n if netloc not in self._parsers:\n self._parsers[netloc] = None\n robotsurl = \"%s://%s/robots.txt\" % (url.scheme, url.netloc)\n robotsreq = Request(\n robotsurl,\n priority=self.DOWNLOAD_PRIORITY,\n meta={'dont_obey_robotstxt': True}\n )\n dfd = self.crawler.engine.download(robotsreq, spider)\n dfd.addCallback(self._parse_robots)\n return self._parsers[netloc]\n\n def _parse_robots(self, response):\n rp = robotparser.RobotFileParser(response.url)\n rp.parse(response.body.splitlines())\n self._parsers[urlparse_cached(response).netloc] = rp\n", "path": "scrapy/contrib/downloadermiddleware/robotstxt.py"}]} | 1,458 | 217 |
gh_patches_debug_3525 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1747 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
port: turn memory scope includesnapshot to false (#5441)
The changes in [turn memory scope includesnapshot to false (#5441)](https://github.com/microsoft/botbuilder-dotnet/pull/5441) may need to be ported to maintain parity with `microsoft/botbuilder-dotnet`.
<blockquote>
Fixes #5432
</blockquote>
Please review and, if necessary, port the changes.
</issue>
<code>
[start of libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from botbuilder.dialogs.memory import scope_path
5
6 from .memory_scope import MemoryScope
7
8
9 class CaseInsensitiveDict(dict):
10 # pylint: disable=protected-access
11
12 @classmethod
13 def _k(cls, key):
14 return key.lower() if isinstance(key, str) else key
15
16 def __init__(self, *args, **kwargs):
17 super(CaseInsensitiveDict, self).__init__(*args, **kwargs)
18 self._convert_keys()
19
20 def __getitem__(self, key):
21 return super(CaseInsensitiveDict, self).__getitem__(self.__class__._k(key))
22
23 def __setitem__(self, key, value):
24 super(CaseInsensitiveDict, self).__setitem__(self.__class__._k(key), value)
25
26 def __delitem__(self, key):
27 return super(CaseInsensitiveDict, self).__delitem__(self.__class__._k(key))
28
29 def __contains__(self, key):
30 return super(CaseInsensitiveDict, self).__contains__(self.__class__._k(key))
31
32 def pop(self, key, *args, **kwargs):
33 return super(CaseInsensitiveDict, self).pop(
34 self.__class__._k(key), *args, **kwargs
35 )
36
37 def get(self, key, *args, **kwargs):
38 return super(CaseInsensitiveDict, self).get(
39 self.__class__._k(key), *args, **kwargs
40 )
41
42 def setdefault(self, key, *args, **kwargs):
43 return super(CaseInsensitiveDict, self).setdefault(
44 self.__class__._k(key), *args, **kwargs
45 )
46
47 def update(self, e=None, **f):
48 if e is None:
49 e = {}
50 super(CaseInsensitiveDict, self).update(self.__class__(e))
51 super(CaseInsensitiveDict, self).update(self.__class__(**f))
52
53 def _convert_keys(self):
54 for k in list(self.keys()):
55 val = super(CaseInsensitiveDict, self).pop(k)
56 self.__setitem__(k, val)
57
58
59 class TurnMemoryScope(MemoryScope):
60 def __init__(self):
61 super().__init__(scope_path.TURN)
62
63 def get_memory(self, dialog_context: "DialogContext") -> object:
64 if not dialog_context:
65 raise TypeError(f"Expecting: DialogContext, but received None")
66
67 turn_value = dialog_context.context.turn_state.get(scope_path.TURN, None)
68
69 if not turn_value:
70 turn_value = CaseInsensitiveDict()
71 dialog_context.context.turn_state[scope_path.TURN] = turn_value
72
73 return turn_value
74
75 def set_memory(self, dialog_context: "DialogContext", memory: object):
76 if not dialog_context:
77 raise TypeError(f"Expecting: DialogContext, but received None")
78
79 dialog_context.context.turn_state[scope_path.TURN] = memory
80
[end of libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py
--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py
+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py
@@ -58,7 +58,7 @@
class TurnMemoryScope(MemoryScope):
def __init__(self):
- super().__init__(scope_path.TURN)
+ super().__init__(scope_path.TURN, False)
def get_memory(self, dialog_context: "DialogContext") -> object:
if not dialog_context:
| {"golden_diff": "diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py\n--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py\n+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py\n@@ -58,7 +58,7 @@\n \n class TurnMemoryScope(MemoryScope):\n def __init__(self):\n- super().__init__(scope_path.TURN)\n+ super().__init__(scope_path.TURN, False)\n \n def get_memory(self, dialog_context: \"DialogContext\") -> object:\n if not dialog_context:\n", "issue": "port: turn memory scope includesnapshot to false (#5441)\nThe changes in [turn memory scope includesnapshot to false (#5441)](https://github.com/microsoft/botbuilder-dotnet/pull/5441) may need to be ported to maintain parity with `microsoft/botbuilder-dotnet`.\n\n<blockquote>\nFixes #5432\n</blockquote>\n\nPlease review and, if necessary, port the changes.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.dialogs.memory import scope_path\n\nfrom .memory_scope import MemoryScope\n\n\nclass CaseInsensitiveDict(dict):\n # pylint: disable=protected-access\n\n @classmethod\n def _k(cls, key):\n return key.lower() if isinstance(key, str) else key\n\n def __init__(self, *args, **kwargs):\n super(CaseInsensitiveDict, self).__init__(*args, **kwargs)\n self._convert_keys()\n\n def __getitem__(self, key):\n return super(CaseInsensitiveDict, self).__getitem__(self.__class__._k(key))\n\n def __setitem__(self, key, value):\n super(CaseInsensitiveDict, self).__setitem__(self.__class__._k(key), value)\n\n def __delitem__(self, key):\n return super(CaseInsensitiveDict, self).__delitem__(self.__class__._k(key))\n\n def __contains__(self, key):\n return super(CaseInsensitiveDict, self).__contains__(self.__class__._k(key))\n\n def pop(self, key, *args, **kwargs):\n return super(CaseInsensitiveDict, self).pop(\n self.__class__._k(key), *args, **kwargs\n )\n\n def get(self, key, *args, **kwargs):\n return super(CaseInsensitiveDict, self).get(\n self.__class__._k(key), *args, **kwargs\n )\n\n def setdefault(self, key, *args, **kwargs):\n return super(CaseInsensitiveDict, self).setdefault(\n self.__class__._k(key), *args, **kwargs\n )\n\n def update(self, e=None, **f):\n if e is None:\n e = {}\n super(CaseInsensitiveDict, self).update(self.__class__(e))\n super(CaseInsensitiveDict, self).update(self.__class__(**f))\n\n def _convert_keys(self):\n for k in list(self.keys()):\n val = super(CaseInsensitiveDict, self).pop(k)\n self.__setitem__(k, val)\n\n\nclass TurnMemoryScope(MemoryScope):\n def __init__(self):\n super().__init__(scope_path.TURN)\n\n def get_memory(self, dialog_context: \"DialogContext\") -> object:\n if not dialog_context:\n raise TypeError(f\"Expecting: DialogContext, but received None\")\n\n turn_value = dialog_context.context.turn_state.get(scope_path.TURN, None)\n\n if not turn_value:\n turn_value = CaseInsensitiveDict()\n dialog_context.context.turn_state[scope_path.TURN] = turn_value\n\n return turn_value\n\n def set_memory(self, dialog_context: \"DialogContext\", memory: object):\n if not dialog_context:\n raise TypeError(f\"Expecting: DialogContext, but received None\")\n\n dialog_context.context.turn_state[scope_path.TURN] = memory\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/memory/scopes/turn_memory_scope.py"}]} | 1,457 | 167 |
gh_patches_debug_14752 | rasdani/github-patches | git_diff | pymodbus-dev__pymodbus-2186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ssl.SSLWantReadError: The operation did not complete (read) when using ModbusTlsClient
<!--
Before opening a new issue, make sure you do the following:
- Check that your issue isn't already filed: https://github.com/pymodbus-dev/pymodbus/issues
- Check the discussions forum https://github.com/pymodbus-dev/pymodbus/discussions
- Prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus
-->
### Versions
- Python: 3.9.9
- OS: MacOS (14.4.1)
- Pymodbus: 3.6.8
- Modbus Hardware (if used):
### Pymodbus Specific
- Server: tls - async
- Client: tls - sync
### Description
I'm starting the async tls modbus server using certificates from the example folder and function `StartAsyncTlsServer` with defining a few slaves. After that, I tried to read some slaves using the client (ModbusTlsClient) and received the exception `Modbus Error: The operation did not complete (read) (_ssl.c:2633)`.
I read slave 1 `python client.py --slave_id 1 --address 0 --count 1 `
### Code and Logs
Client code is following:
```python
import argparse
import pymodbus
from pymodbus.client import ModbusTcpClient, ModbusTlsClient
from pymodbus.exceptions import ConnectionException, ModbusIOException
pymodbus.pymodbus_apply_logging_config("DEBUG")
def main():
parser = argparse.ArgumentParser(description="Modbus client")
parser.add_argument("--write", action="store_true", help="Write mode")
parser.add_argument("--host", type=str, help="Host", default="localhost")
parser.add_argument("--port", type=int, help="Port", default=502)
parser.add_argument("--slave_id", type=int, help="Slave ID")
parser.add_argument("--address", type=int, help="Address")
parser.add_argument("--count", type=int, help="Count of registers to read", default=1)
parser.add_argument("--value", type=int, help="Value to write", default=0)
args = parser.parse_args()
try:
client = ModbusTlsClient(
args.host,
args.port,
certfile="certificates/pymodbus.crt",
keyfile="certificates/pymodbus.key",
server_hostname="localhost",
)
client.connect()
if args.write:
result = client.write_register(args.address, args.value, slave=args.slave_id)
else:
result = client.read_holding_registers(args.address, args.count, slave=args.slave_id)
if result.isError():
print(f"ModbusClient Error, the id {args.slave_id} or address {args.address} is invalid")
print(result)
return
except (ConnectionException, ModbusIOException) as e:
print(f"Error connecting to {args.host}:{args.port} ({str(e)})")
return
print("Results:")
print(result.registers)
if __name__ == "__main__":
main()
```
### Server logs
```
2024-05-02 12:13:16,849 DEBUG logging:103 Awaiting connections server_listener
2024-05-02 12:13:16,850 INFO logging:97 Server listening.
2024-05-02 12:13:29,453 DEBUG logging:103 Connected to server
2024-05-02 12:13:29,453 DEBUG logging:103 recv: 0x3 0x0 0x0 0x0 0x1 old_data: addr=None
2024-05-02 12:13:29,453 DEBUG logging:103 Handling data: 0x3 0x0 0x0 0x0 0x1
2024-05-02 12:13:29,453 DEBUG logging:103 Processing: 0x3 0x0 0x0 0x0 0x1
2024-05-02 12:13:29,453 DEBUG logging:103 Factory Request[ReadHoldingRegistersRequest': 3]
2024-05-02 12:13:29,454 ERROR logging:115 requested slave does not exist: 0
2024-05-02 12:13:29,454 ERROR logging:115 Exception response Exception Response(131, 3, GatewayNoResponse)
2024-05-02 12:13:29,454 DEBUG logging:103 send: 0x83 0xb
2024-05-02 12:13:29,454 DEBUG logging:103 -> transport: received eof
2024-05-02 12:13:29,454 DEBUG logging:103 Connection lost server due to None
2024-05-02 12:13:29,454 DEBUG logging:103 Handler for stream [server] has been canceled
```
### Client logs
```
2024-05-02 12:13:29,453 DEBUG logging:103 Current transaction state - IDLE
2024-05-02 12:13:29,453 DEBUG logging:103 Running transaction 1
2024-05-02 12:13:29,453 DEBUG logging:103 SEND: 0x3 0x0 0x0 0x0 0x1
2024-05-02 12:13:29,453 DEBUG logging:103 New Transaction state "SENDING"
2024-05-02 12:13:29,453 DEBUG logging:103 Changing transaction state from "SENDING" to "WAITING FOR REPLY"
2024-05-02 12:13:29,453 DEBUG logging:103 Transaction failed. (The operation did not complete (read) (_ssl.c:2633))
Traceback (most recent call last):
File "client.py", line 43, in <module>
main()
File "client.py", line 30, in main
result = client.read_holding_registers(args.address, args.count, slave=args.slave_id)
File ".venv/lib/python3.9/site-packages/pymodbus/client/mixin.py", line 107, in read_holding_registers
return self.execute(
File ".venv/lib/python3.9/site-packages/pymodbus/client/base.py", line 396, in execute
return self.transaction.execute(request)
File ".venv/lib/python3.9/site-packages/pymodbus/transaction.py", line 180, in execute
response, last_exception = self._transact(
File ".venv/lib/python3.9/site-packages/pymodbus/transaction.py", line 326, in _transact
result = self._recv(response_length, full)
File ".venv/lib/python3.9/site-packages/pymodbus/transaction.py", line 357, in _recv
read_min = self.client.framer.recvPacket(min_size)
File ".venv/lib/python3.9/site-packages/pymodbus/framer/base.py", line 79, in recvPacket
return self.client.recv(size)
File ".venv/lib/python3.9/site-packages/pymodbus/client/tcp.py", line 236, in recv
if (recv_data := self.socket.recv(recv_size)) == b"":
File "/Users/vmartyniak/.pyenv/versions/3.9.9/lib/python3.9/ssl.py", line 1227, in recv
return self.read(buflen)
File "/Users/vmartyniak/.pyenv/versions/3.9.9/lib/python3.9/ssl.py", line 1101, in read
return self._sslobj.read(len)
ssl.SSLWantReadError: The operation did not complete (read) (_ssl.c:2633)
```
</issue>
<code>
[start of pymodbus/framer/old_framer_tls.py]
1 """TLS framer."""
2 import struct
3
4 from pymodbus.exceptions import (
5 ModbusIOException,
6 )
7 from pymodbus.framer.old_framer_base import TLS_FRAME_HEADER, ModbusFramer
8 from pymodbus.framer.tls import FramerTLS
9
10
11 # --------------------------------------------------------------------------- #
12 # Modbus TLS old framer
13 # --------------------------------------------------------------------------- #
14
15
16 class ModbusTlsFramer(ModbusFramer):
17 """Modbus TLS Frame controller.
18
19 No prefix MBAP header before decrypted PDU is used as a message frame for
20 Modbus Security Application Protocol. It allows us to easily separate
21 decrypted messages which is PDU as follows:
22
23 [ Function Code] [ Data ]
24 1b Nb
25 """
26
27 method = "tls"
28
29 def __init__(self, decoder, client=None):
30 """Initialize a new instance of the framer.
31
32 :param decoder: The decoder factory implementation to use
33 """
34 super().__init__(decoder, client)
35 self._hsize = 0x0
36 self.message_handler = FramerTLS()
37
38 def decode_data(self, data):
39 """Decode data."""
40 if len(data) > self._hsize:
41 (fcode,) = struct.unpack(TLS_FRAME_HEADER, data[0 : self._hsize + 1])
42 return {"fcode": fcode}
43 return {}
44
45 def frameProcessIncomingPacket(self, _single, callback, _slave, _tid=None, **kwargs):
46 """Process new packet pattern."""
47 # no slave id for Modbus Security Application Protocol
48
49 while True:
50 used_len, use_tid, dev_id, data = self.message_handler.decode(self._buffer)
51 if not data:
52 return
53 self._header["uid"] = dev_id
54 self._header["tid"] = use_tid
55 self._header["pid"] = 0
56
57 if (result := self.decoder.decode(data)) is None:
58 self.resetFrame()
59 raise ModbusIOException("Unable to decode request")
60 self.populateResult(result)
61 self._buffer = self._buffer[used_len:]
62 self._header = {"tid": 0, "pid": 0, "len": 0, "uid": 0}
63 callback(result) # defer or push to a thread?
64
[end of pymodbus/framer/old_framer_tls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pymodbus/framer/old_framer_tls.py b/pymodbus/framer/old_framer_tls.py
--- a/pymodbus/framer/old_framer_tls.py
+++ b/pymodbus/framer/old_framer_tls.py
@@ -1,5 +1,6 @@
"""TLS framer."""
import struct
+from time import sleep
from pymodbus.exceptions import (
ModbusIOException,
@@ -42,6 +43,11 @@
return {"fcode": fcode}
return {}
+ def recvPacket(self, size):
+ """Receive packet from the bus."""
+ sleep(0.5)
+ return super().recvPacket(size)
+
def frameProcessIncomingPacket(self, _single, callback, _slave, _tid=None, **kwargs):
"""Process new packet pattern."""
# no slave id for Modbus Security Application Protocol
| {"golden_diff": "diff --git a/pymodbus/framer/old_framer_tls.py b/pymodbus/framer/old_framer_tls.py\n--- a/pymodbus/framer/old_framer_tls.py\n+++ b/pymodbus/framer/old_framer_tls.py\n@@ -1,5 +1,6 @@\n \"\"\"TLS framer.\"\"\"\n import struct\n+from time import sleep\n \n from pymodbus.exceptions import (\n ModbusIOException,\n@@ -42,6 +43,11 @@\n return {\"fcode\": fcode}\n return {}\n \n+ def recvPacket(self, size):\n+ \"\"\"Receive packet from the bus.\"\"\"\n+ sleep(0.5)\n+ return super().recvPacket(size)\n+\n def frameProcessIncomingPacket(self, _single, callback, _slave, _tid=None, **kwargs):\n \"\"\"Process new packet pattern.\"\"\"\n # no slave id for Modbus Security Application Protocol\n", "issue": "ssl.SSLWantReadError: The operation did not complete (read) when using ModbusTlsClient\n<!--\r\nBefore opening a new issue, make sure you do the following:\r\n\r\n- Check that your issue isn't already filed: https://github.com/pymodbus-dev/pymodbus/issues\r\n- Check the discussions forum https://github.com/pymodbus-dev/pymodbus/discussions\r\n- Prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus\r\n-->\r\n\r\n### Versions\r\n\r\n- Python: 3.9.9\r\n- OS: MacOS (14.4.1)\r\n- Pymodbus: 3.6.8\r\n- Modbus Hardware (if used):\r\n\r\n### Pymodbus Specific\r\n\r\n- Server: tls - async\r\n- Client: tls - sync\r\n\r\n### Description\r\n\r\nI'm starting the async tls modbus server using certificates from the example folder and function `StartAsyncTlsServer` with defining a few slaves. After that, I tried to read some slaves using the client (ModbusTlsClient) and received the exception `Modbus Error: The operation did not complete (read) (_ssl.c:2633)`.\r\nI read slave 1 `python client.py --slave_id 1 --address 0 --count 1 `\r\n\r\n### Code and Logs\r\n\r\nClient code is following:\r\n```python\r\nimport argparse\r\nimport pymodbus\r\nfrom pymodbus.client import ModbusTcpClient, ModbusTlsClient\r\nfrom pymodbus.exceptions import ConnectionException, ModbusIOException\r\n\r\npymodbus.pymodbus_apply_logging_config(\"DEBUG\")\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description=\"Modbus client\")\r\n parser.add_argument(\"--write\", action=\"store_true\", help=\"Write mode\")\r\n parser.add_argument(\"--host\", type=str, help=\"Host\", default=\"localhost\")\r\n parser.add_argument(\"--port\", type=int, help=\"Port\", default=502)\r\n parser.add_argument(\"--slave_id\", type=int, help=\"Slave ID\")\r\n parser.add_argument(\"--address\", type=int, help=\"Address\")\r\n parser.add_argument(\"--count\", type=int, help=\"Count of registers to read\", default=1)\r\n parser.add_argument(\"--value\", type=int, help=\"Value to write\", default=0)\r\n\r\n args = parser.parse_args()\r\n try:\r\n client = ModbusTlsClient(\r\n args.host,\r\n args.port,\r\n certfile=\"certificates/pymodbus.crt\",\r\n keyfile=\"certificates/pymodbus.key\",\r\n server_hostname=\"localhost\",\r\n )\r\n client.connect()\r\n if args.write:\r\n result = client.write_register(args.address, args.value, slave=args.slave_id)\r\n else:\r\n result = client.read_holding_registers(args.address, args.count, slave=args.slave_id)\r\n if result.isError():\r\n print(f\"ModbusClient Error, the id {args.slave_id} or address {args.address} is invalid\")\r\n print(result)\r\n return\r\n except (ConnectionException, ModbusIOException) as e:\r\n print(f\"Error connecting to {args.host}:{args.port} ({str(e)})\")\r\n return\r\n print(\"Results:\")\r\n print(result.registers)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n\r\n```\r\n\r\n### Server logs\r\n```\r\n2024-05-02 12:13:16,849 DEBUG logging:103 Awaiting connections server_listener\r\n2024-05-02 12:13:16,850 INFO logging:97 Server listening.\r\n2024-05-02 12:13:29,453 DEBUG logging:103 Connected to server\r\n2024-05-02 12:13:29,453 DEBUG logging:103 recv: 0x3 0x0 0x0 0x0 0x1 old_data: addr=None\r\n2024-05-02 12:13:29,453 DEBUG logging:103 Handling data: 0x3 0x0 0x0 0x0 0x1\r\n2024-05-02 12:13:29,453 DEBUG logging:103 Processing: 0x3 0x0 0x0 0x0 0x1\r\n2024-05-02 12:13:29,453 DEBUG logging:103 Factory Request[ReadHoldingRegistersRequest': 3]\r\n2024-05-02 12:13:29,454 ERROR logging:115 requested slave does not exist: 0\r\n2024-05-02 12:13:29,454 ERROR logging:115 Exception response Exception Response(131, 3, GatewayNoResponse)\r\n2024-05-02 12:13:29,454 DEBUG logging:103 send: 0x83 0xb\r\n2024-05-02 12:13:29,454 DEBUG logging:103 -> transport: received eof\r\n2024-05-02 12:13:29,454 DEBUG logging:103 Connection lost server due to None\r\n2024-05-02 12:13:29,454 DEBUG logging:103 Handler for stream [server] has been canceled\r\n```\r\n\r\n### Client logs\r\n```\r\n2024-05-02 12:13:29,453 DEBUG logging:103 Current transaction state - IDLE\r\n2024-05-02 12:13:29,453 DEBUG logging:103 Running transaction 1\r\n2024-05-02 12:13:29,453 DEBUG logging:103 SEND: 0x3 0x0 0x0 0x0 0x1\r\n2024-05-02 12:13:29,453 DEBUG logging:103 New Transaction state \"SENDING\"\r\n2024-05-02 12:13:29,453 DEBUG logging:103 Changing transaction state from \"SENDING\" to \"WAITING FOR REPLY\"\r\n2024-05-02 12:13:29,453 DEBUG logging:103 Transaction failed. (The operation did not complete (read) (_ssl.c:2633))\r\nTraceback (most recent call last):\r\n File \"client.py\", line 43, in <module>\r\n main()\r\n File \"client.py\", line 30, in main\r\n result = client.read_holding_registers(args.address, args.count, slave=args.slave_id)\r\n File \".venv/lib/python3.9/site-packages/pymodbus/client/mixin.py\", line 107, in read_holding_registers\r\n return self.execute(\r\n File \".venv/lib/python3.9/site-packages/pymodbus/client/base.py\", line 396, in execute\r\n return self.transaction.execute(request)\r\n File \".venv/lib/python3.9/site-packages/pymodbus/transaction.py\", line 180, in execute\r\n response, last_exception = self._transact(\r\n File \".venv/lib/python3.9/site-packages/pymodbus/transaction.py\", line 326, in _transact\r\n result = self._recv(response_length, full)\r\n File \".venv/lib/python3.9/site-packages/pymodbus/transaction.py\", line 357, in _recv\r\n read_min = self.client.framer.recvPacket(min_size)\r\n File \".venv/lib/python3.9/site-packages/pymodbus/framer/base.py\", line 79, in recvPacket\r\n return self.client.recv(size)\r\n File \".venv/lib/python3.9/site-packages/pymodbus/client/tcp.py\", line 236, in recv\r\n if (recv_data := self.socket.recv(recv_size)) == b\"\":\r\n File \"/Users/vmartyniak/.pyenv/versions/3.9.9/lib/python3.9/ssl.py\", line 1227, in recv\r\n return self.read(buflen)\r\n File \"/Users/vmartyniak/.pyenv/versions/3.9.9/lib/python3.9/ssl.py\", line 1101, in read\r\n return self._sslobj.read(len)\r\nssl.SSLWantReadError: The operation did not complete (read) (_ssl.c:2633)\r\n```\r\n\n", "before_files": [{"content": "\"\"\"TLS framer.\"\"\"\nimport struct\n\nfrom pymodbus.exceptions import (\n ModbusIOException,\n)\nfrom pymodbus.framer.old_framer_base import TLS_FRAME_HEADER, ModbusFramer\nfrom pymodbus.framer.tls import FramerTLS\n\n\n# --------------------------------------------------------------------------- #\n# Modbus TLS old framer\n# --------------------------------------------------------------------------- #\n\n\nclass ModbusTlsFramer(ModbusFramer):\n \"\"\"Modbus TLS Frame controller.\n\n No prefix MBAP header before decrypted PDU is used as a message frame for\n Modbus Security Application Protocol. It allows us to easily separate\n decrypted messages which is PDU as follows:\n\n [ Function Code] [ Data ]\n 1b Nb\n \"\"\"\n\n method = \"tls\"\n\n def __init__(self, decoder, client=None):\n \"\"\"Initialize a new instance of the framer.\n\n :param decoder: The decoder factory implementation to use\n \"\"\"\n super().__init__(decoder, client)\n self._hsize = 0x0\n self.message_handler = FramerTLS()\n\n def decode_data(self, data):\n \"\"\"Decode data.\"\"\"\n if len(data) > self._hsize:\n (fcode,) = struct.unpack(TLS_FRAME_HEADER, data[0 : self._hsize + 1])\n return {\"fcode\": fcode}\n return {}\n\n def frameProcessIncomingPacket(self, _single, callback, _slave, _tid=None, **kwargs):\n \"\"\"Process new packet pattern.\"\"\"\n # no slave id for Modbus Security Application Protocol\n\n while True:\n used_len, use_tid, dev_id, data = self.message_handler.decode(self._buffer)\n if not data:\n return\n self._header[\"uid\"] = dev_id\n self._header[\"tid\"] = use_tid\n self._header[\"pid\"] = 0\n\n if (result := self.decoder.decode(data)) is None:\n self.resetFrame()\n raise ModbusIOException(\"Unable to decode request\")\n self.populateResult(result)\n self._buffer = self._buffer[used_len:]\n self._header = {\"tid\": 0, \"pid\": 0, \"len\": 0, \"uid\": 0}\n callback(result) # defer or push to a thread?\n", "path": "pymodbus/framer/old_framer_tls.py"}]} | 3,121 | 200 |
gh_patches_debug_8913 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2644 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
I re-read the specification. I think the requirement is to json encode the non-string attribute values not just the sequence type. For example `_check_value(True)` returns `'True'` which should actually be `'true'`.
I re-read the specification. I think the requirement is to json encode the non-string attribute values not just the sequence type. For example `_check_value(True)` returns `'True'` which should actually be `'true'`.
_Originally posted by @srikanthccv in https://github.com/open-telemetry/opentelemetry-python/pull/2642#discussion_r859218726_
</issue>
<code>
[start of exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This library allows export of metrics data to `Prometheus <https://prometheus.io/>`_.
17
18 Usage
19 -----
20
21 The **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_
22 metrics to `Prometheus`_.
23
24
25 .. _Prometheus: https://prometheus.io/
26 .. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
27
28 .. code:: python
29
30 from prometheus_client import start_http_server
31
32 from opentelemetry._metrics import get_meter_provider, set_meter_provider
33 from opentelemetry.exporter.prometheus import PrometheusMetricReader
34 from opentelemetry.sdk._metrics import MeterProvider
35
36 # Start Prometheus client
37 start_http_server(port=8000, addr="localhost")
38
39 # Exporter to export metrics to Prometheus
40 prefix = "MyAppPrefix"
41 reader = PrometheusMetricReader(prefix)
42
43 # Meter is responsible for creating and recording metrics
44 set_meter_provider(MeterProvider(metric_readers=[reader]))
45 meter = get_meter_provider().get_meter("myapp", "0.1.2")
46
47 counter = meter.create_counter(
48 "requests",
49 "requests",
50 "number of requests",
51 )
52
53 # Labels are used to identify key-values that are associated with a specific
54 # metric that you want to record. These are useful for pre-aggregation and can
55 # be used to store custom dimensions pertaining to a metric
56 labels = {"environment": "staging"}
57
58 counter.add(25, labels)
59 input("Press any key to exit...")
60
61 API
62 ---
63 """
64
65 from collections import deque
66 from itertools import chain
67 from json import dumps
68 from logging import getLogger
69 from re import IGNORECASE, UNICODE, compile
70 from typing import Iterable, Optional, Sequence, Tuple, Union
71
72 from prometheus_client import core
73
74 from opentelemetry.sdk._metrics.export import MetricReader
75 from opentelemetry.sdk._metrics.point import Gauge, Histogram, Metric, Sum
76
77 _logger = getLogger(__name__)
78
79
80 def _convert_buckets(metric: Metric) -> Sequence[Tuple[str, int]]:
81 buckets = []
82 total_count = 0
83 for upper_bound, count in zip(
84 chain(metric.point.explicit_bounds, ["+Inf"]),
85 metric.point.bucket_counts,
86 ):
87 total_count += count
88 buckets.append((f"{upper_bound}", total_count))
89
90 return buckets
91
92
93 class PrometheusMetricReader(MetricReader):
94 """Prometheus metric exporter for OpenTelemetry.
95
96 Args:
97 prefix: single-word application prefix relevant to the domain
98 the metric belongs to.
99 """
100
101 def __init__(self, prefix: str = "") -> None:
102 super().__init__()
103 self._collector = _CustomCollector(prefix)
104 core.REGISTRY.register(self._collector)
105 self._collector._callback = self.collect
106
107 def _receive_metrics(self, metrics: Iterable[Metric]) -> None:
108 if metrics is None:
109 return
110 self._collector.add_metrics_data(metrics)
111
112 def shutdown(self) -> bool:
113 core.REGISTRY.unregister(self._collector)
114 return True
115
116
117 class _CustomCollector:
118 """_CustomCollector represents the Prometheus Collector object
119
120 See more:
121 https://github.com/prometheus/client_python#custom-collectors
122 """
123
124 def __init__(self, prefix: str = ""):
125 self._prefix = prefix
126 self._callback = None
127 self._metrics_to_export = deque()
128 self._non_letters_digits_underscore_re = compile(
129 r"[^\w]", UNICODE | IGNORECASE
130 )
131
132 def add_metrics_data(self, export_records: Sequence[Metric]) -> None:
133 """Add metrics to Prometheus data"""
134 self._metrics_to_export.append(export_records)
135
136 def collect(self) -> None:
137 """Collect fetches the metrics from OpenTelemetry
138 and delivers them as Prometheus Metrics.
139 Collect is invoked every time a ``prometheus.Gatherer`` is run
140 for example when the HTTP endpoint is invoked by Prometheus.
141 """
142 if self._callback is not None:
143 self._callback()
144
145 while self._metrics_to_export:
146 for export_record in self._metrics_to_export.popleft():
147 prometheus_metric = self._translate_to_prometheus(
148 export_record
149 )
150 if prometheus_metric is not None:
151 yield prometheus_metric
152
153 def _translate_to_prometheus(
154 self, metric: Metric
155 ) -> Optional[core.Metric]:
156 prometheus_metric = None
157 label_values = []
158 label_keys = []
159 for key, value in metric.attributes.items():
160 label_keys.append(self._sanitize(key))
161 label_values.append(self._check_value(value))
162
163 metric_name = ""
164 if self._prefix != "":
165 metric_name = self._prefix + "_"
166 metric_name += self._sanitize(metric.name)
167
168 description = metric.description or ""
169 if isinstance(metric.point, Sum):
170 prometheus_metric = core.CounterMetricFamily(
171 name=metric_name,
172 documentation=description,
173 labels=label_keys,
174 unit=metric.unit,
175 )
176 prometheus_metric.add_metric(
177 labels=label_values, value=metric.point.value
178 )
179 elif isinstance(metric.point, Gauge):
180 prometheus_metric = core.GaugeMetricFamily(
181 name=metric_name,
182 documentation=description,
183 labels=label_keys,
184 unit=metric.unit,
185 )
186 prometheus_metric.add_metric(
187 labels=label_values, value=metric.point.value
188 )
189 elif isinstance(metric.point, Histogram):
190 value = metric.point.sum
191 prometheus_metric = core.HistogramMetricFamily(
192 name=metric_name,
193 documentation=description,
194 labels=label_keys,
195 unit=metric.unit,
196 )
197 buckets = _convert_buckets(metric)
198 prometheus_metric.add_metric(
199 labels=label_values, buckets=buckets, sum_value=value
200 )
201 else:
202 _logger.warning("Unsupported metric type. %s", type(metric.point))
203 return prometheus_metric
204
205 def _sanitize(self, key: str) -> str:
206 """sanitize the given metric name or label according to Prometheus rule.
207 Replace all characters other than [A-Za-z0-9_] with '_'.
208 """
209 return self._non_letters_digits_underscore_re.sub("_", key)
210
211 # pylint: disable=no-self-use
212 def _check_value(self, value: Union[int, float, str, Sequence]) -> str:
213 """Check the label value and return is appropriate representation"""
214 if not isinstance(value, str) and isinstance(value, Sequence):
215 return dumps(value, default=str)
216 return str(value)
217
[end of exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
--- a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
+++ b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
@@ -211,6 +211,6 @@
# pylint: disable=no-self-use
def _check_value(self, value: Union[int, float, str, Sequence]) -> str:
"""Check the label value and return is appropriate representation"""
- if not isinstance(value, str) and isinstance(value, Sequence):
+ if not isinstance(value, str):
return dumps(value, default=str)
return str(value)
| {"golden_diff": "diff --git a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n--- a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n+++ b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n@@ -211,6 +211,6 @@\n # pylint: disable=no-self-use\n def _check_value(self, value: Union[int, float, str, Sequence]) -> str:\n \"\"\"Check the label value and return is appropriate representation\"\"\"\n- if not isinstance(value, str) and isinstance(value, Sequence):\n+ if not isinstance(value, str):\n return dumps(value, default=str)\n return str(value)\n", "issue": "I re-read the specification. I think the requirement is to json encode the non-string attribute values not just the sequence type. For example `_check_value(True)` returns `'True'` which should actually be `'true'`.\nI re-read the specification. I think the requirement is to json encode the non-string attribute values not just the sequence type. For example `_check_value(True)` returns `'True'` which should actually be `'true'`.\r\n\r\n_Originally posted by @srikanthccv in https://github.com/open-telemetry/opentelemetry-python/pull/2642#discussion_r859218726_\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows export of metrics data to `Prometheus <https://prometheus.io/>`_.\n\nUsage\n-----\n\nThe **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_\nmetrics to `Prometheus`_.\n\n\n.. _Prometheus: https://prometheus.io/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. code:: python\n\n from prometheus_client import start_http_server\n\n from opentelemetry._metrics import get_meter_provider, set_meter_provider\n from opentelemetry.exporter.prometheus import PrometheusMetricReader\n from opentelemetry.sdk._metrics import MeterProvider\n\n # Start Prometheus client\n start_http_server(port=8000, addr=\"localhost\")\n\n # Exporter to export metrics to Prometheus\n prefix = \"MyAppPrefix\"\n reader = PrometheusMetricReader(prefix)\n\n # Meter is responsible for creating and recording metrics\n set_meter_provider(MeterProvider(metric_readers=[reader]))\n meter = get_meter_provider().get_meter(\"myapp\", \"0.1.2\")\n\n counter = meter.create_counter(\n \"requests\",\n \"requests\",\n \"number of requests\",\n )\n\n # Labels are used to identify key-values that are associated with a specific\n # metric that you want to record. These are useful for pre-aggregation and can\n # be used to store custom dimensions pertaining to a metric\n labels = {\"environment\": \"staging\"}\n\n counter.add(25, labels)\n input(\"Press any key to exit...\")\n\nAPI\n---\n\"\"\"\n\nfrom collections import deque\nfrom itertools import chain\nfrom json import dumps\nfrom logging import getLogger\nfrom re import IGNORECASE, UNICODE, compile\nfrom typing import Iterable, Optional, Sequence, Tuple, Union\n\nfrom prometheus_client import core\n\nfrom opentelemetry.sdk._metrics.export import MetricReader\nfrom opentelemetry.sdk._metrics.point import Gauge, Histogram, Metric, Sum\n\n_logger = getLogger(__name__)\n\n\ndef _convert_buckets(metric: Metric) -> Sequence[Tuple[str, int]]:\n buckets = []\n total_count = 0\n for upper_bound, count in zip(\n chain(metric.point.explicit_bounds, [\"+Inf\"]),\n metric.point.bucket_counts,\n ):\n total_count += count\n buckets.append((f\"{upper_bound}\", total_count))\n\n return buckets\n\n\nclass PrometheusMetricReader(MetricReader):\n \"\"\"Prometheus metric exporter for OpenTelemetry.\n\n Args:\n prefix: single-word application prefix relevant to the domain\n the metric belongs to.\n \"\"\"\n\n def __init__(self, prefix: str = \"\") -> None:\n super().__init__()\n self._collector = _CustomCollector(prefix)\n core.REGISTRY.register(self._collector)\n self._collector._callback = self.collect\n\n def _receive_metrics(self, metrics: Iterable[Metric]) -> None:\n if metrics is None:\n return\n self._collector.add_metrics_data(metrics)\n\n def shutdown(self) -> bool:\n core.REGISTRY.unregister(self._collector)\n return True\n\n\nclass _CustomCollector:\n \"\"\"_CustomCollector represents the Prometheus Collector object\n\n See more:\n https://github.com/prometheus/client_python#custom-collectors\n \"\"\"\n\n def __init__(self, prefix: str = \"\"):\n self._prefix = prefix\n self._callback = None\n self._metrics_to_export = deque()\n self._non_letters_digits_underscore_re = compile(\n r\"[^\\w]\", UNICODE | IGNORECASE\n )\n\n def add_metrics_data(self, export_records: Sequence[Metric]) -> None:\n \"\"\"Add metrics to Prometheus data\"\"\"\n self._metrics_to_export.append(export_records)\n\n def collect(self) -> None:\n \"\"\"Collect fetches the metrics from OpenTelemetry\n and delivers them as Prometheus Metrics.\n Collect is invoked every time a ``prometheus.Gatherer`` is run\n for example when the HTTP endpoint is invoked by Prometheus.\n \"\"\"\n if self._callback is not None:\n self._callback()\n\n while self._metrics_to_export:\n for export_record in self._metrics_to_export.popleft():\n prometheus_metric = self._translate_to_prometheus(\n export_record\n )\n if prometheus_metric is not None:\n yield prometheus_metric\n\n def _translate_to_prometheus(\n self, metric: Metric\n ) -> Optional[core.Metric]:\n prometheus_metric = None\n label_values = []\n label_keys = []\n for key, value in metric.attributes.items():\n label_keys.append(self._sanitize(key))\n label_values.append(self._check_value(value))\n\n metric_name = \"\"\n if self._prefix != \"\":\n metric_name = self._prefix + \"_\"\n metric_name += self._sanitize(metric.name)\n\n description = metric.description or \"\"\n if isinstance(metric.point, Sum):\n prometheus_metric = core.CounterMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n prometheus_metric.add_metric(\n labels=label_values, value=metric.point.value\n )\n elif isinstance(metric.point, Gauge):\n prometheus_metric = core.GaugeMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n prometheus_metric.add_metric(\n labels=label_values, value=metric.point.value\n )\n elif isinstance(metric.point, Histogram):\n value = metric.point.sum\n prometheus_metric = core.HistogramMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n buckets = _convert_buckets(metric)\n prometheus_metric.add_metric(\n labels=label_values, buckets=buckets, sum_value=value\n )\n else:\n _logger.warning(\"Unsupported metric type. %s\", type(metric.point))\n return prometheus_metric\n\n def _sanitize(self, key: str) -> str:\n \"\"\"sanitize the given metric name or label according to Prometheus rule.\n Replace all characters other than [A-Za-z0-9_] with '_'.\n \"\"\"\n return self._non_letters_digits_underscore_re.sub(\"_\", key)\n\n # pylint: disable=no-self-use\n def _check_value(self, value: Union[int, float, str, Sequence]) -> str:\n \"\"\"Check the label value and return is appropriate representation\"\"\"\n if not isinstance(value, str) and isinstance(value, Sequence):\n return dumps(value, default=str)\n return str(value)\n", "path": "exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py"}]} | 2,781 | 189 |
gh_patches_debug_30826 | rasdani/github-patches | git_diff | freedomofpress__securedrop-4133 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[xenial] Verify Trusty backup -> Xenial recovery story
We should ensure that a SecureDrop backup completed on 14.04 can successfully be recovered on 16.04. Whether we ask admins to perform this step manually, or whether we automate it, it may be a required part of the Xenial migration and will certainly be highly recommended.
If clean upgrades to Xenial are not yet implemented one should complete this ticket by following these steps instead:
1. Create a backup on 14.04 server
2. Create fresh install on 16.04 and then attempt to run the restore
Part of #3204, may result in follow-up issues.
</issue>
<code>
[start of install_files/ansible-base/roles/restore/files/restore.py]
1 #!/usr/bin/python2.7
2 """
3 This script and backup archive should be copied to the App server and run by
4 the Ansible playbook. When run (as root), it restores the contents of the 0.3
5 backup file to the machine it's run on.
6
7 python restore.py sd-backup-TIMESTAMP.tar.gz
8 """
9
10 import os
11 import subprocess
12 import sys
13 import tarfile
14
15
16 def verify_args():
17 usage = """
18 Usage: restore.py <backup file>
19
20 <backup file> Path to a SecureDrop 0.3 backup created by backup.py"
21 """
22 if len(sys.argv) != 2:
23 print(usage)
24 sys.exit(1)
25
26 if not os.path.exists(sys.argv[1]):
27 print("<backup file> '{}' not found".format(sys.argv[1]))
28 sys.exit(1)
29
30 if os.geteuid() != 0:
31 print("This program must be run as root!")
32 sys.exit(1)
33
34
35 def main():
36 verify_args()
37
38 with tarfile.open(sys.argv[1], 'r:*') as backup:
39 # This assumes that both the old installation (source of the backup)
40 # and the new installation (destination of the restore) used the
41 # default paths for various locations.
42 backup.extractall(path='/')
43
44 # Reload Tor and the web server so they pick up the new configuration
45 # If the process exits with a non-zero return code, raises an exception.
46 subprocess.check_call(['service', 'apache2', 'restart'])
47 subprocess.check_call(['service', 'tor', 'reload'])
48 # Apply database migrations (if backed-up version < version to restore)
49 subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])
50
51
52 if __name__ == "__main__":
53 main()
54
[end of install_files/ansible-base/roles/restore/files/restore.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/install_files/ansible-base/roles/restore/files/restore.py b/install_files/ansible-base/roles/restore/files/restore.py
--- a/install_files/ansible-base/roles/restore/files/restore.py
+++ b/install_files/ansible-base/roles/restore/files/restore.py
@@ -8,6 +8,7 @@
"""
import os
+import shutil
import subprocess
import sys
import tarfile
@@ -35,18 +36,29 @@
def main():
verify_args()
+ # Remove the /var/lib/tor/services directories to purge values that may have been
+ # generated by running the ansible playbooks
+ for d in ['journalist', 'source']:
+ full_path = os.path.join('/var/lib/tor/services', d)
+ if os.path.exists(full_path):
+ shutil.rmtree(full_path)
+
with tarfile.open(sys.argv[1], 'r:*') as backup:
# This assumes that both the old installation (source of the backup)
# and the new installation (destination of the restore) used the
# default paths for various locations.
backup.extractall(path='/')
+ # Apply database migrations (if backed-up version < version to restore)
+ subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])
+
+ # Update the configs
+ subprocess.check_call(['dpkg-reconfigure', 'securedrop-config'])
+
# Reload Tor and the web server so they pick up the new configuration
# If the process exits with a non-zero return code, raises an exception.
subprocess.check_call(['service', 'apache2', 'restart'])
subprocess.check_call(['service', 'tor', 'reload'])
- # Apply database migrations (if backed-up version < version to restore)
- subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])
if __name__ == "__main__":
| {"golden_diff": "diff --git a/install_files/ansible-base/roles/restore/files/restore.py b/install_files/ansible-base/roles/restore/files/restore.py\n--- a/install_files/ansible-base/roles/restore/files/restore.py\n+++ b/install_files/ansible-base/roles/restore/files/restore.py\n@@ -8,6 +8,7 @@\n \"\"\"\n \n import os\n+import shutil\n import subprocess\n import sys\n import tarfile\n@@ -35,18 +36,29 @@\n def main():\n verify_args()\n \n+ # Remove the /var/lib/tor/services directories to purge values that may have been\n+ # generated by running the ansible playbooks\n+ for d in ['journalist', 'source']:\n+ full_path = os.path.join('/var/lib/tor/services', d)\n+ if os.path.exists(full_path):\n+ shutil.rmtree(full_path)\n+\n with tarfile.open(sys.argv[1], 'r:*') as backup:\n # This assumes that both the old installation (source of the backup)\n # and the new installation (destination of the restore) used the\n # default paths for various locations.\n backup.extractall(path='/')\n \n+ # Apply database migrations (if backed-up version < version to restore)\n+ subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])\n+\n+ # Update the configs\n+ subprocess.check_call(['dpkg-reconfigure', 'securedrop-config'])\n+\n # Reload Tor and the web server so they pick up the new configuration\n # If the process exits with a non-zero return code, raises an exception.\n subprocess.check_call(['service', 'apache2', 'restart'])\n subprocess.check_call(['service', 'tor', 'reload'])\n- # Apply database migrations (if backed-up version < version to restore)\n- subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])\n \n \n if __name__ == \"__main__\":\n", "issue": "[xenial] Verify Trusty backup -> Xenial recovery story\nWe should ensure that a SecureDrop backup completed on 14.04 can successfully be recovered on 16.04. Whether we ask admins to perform this step manually, or whether we automate it, it may be a required part of the Xenial migration and will certainly be highly recommended.\r\n\r\nIf clean upgrades to Xenial are not yet implemented one should complete this ticket by following these steps instead:\r\n\r\n1. Create a backup on 14.04 server\r\n2. Create fresh install on 16.04 and then attempt to run the restore\r\n\r\nPart of #3204, may result in follow-up issues.\n", "before_files": [{"content": "#!/usr/bin/python2.7\n\"\"\"\nThis script and backup archive should be copied to the App server and run by\nthe Ansible playbook. When run (as root), it restores the contents of the 0.3\nbackup file to the machine it's run on.\n\npython restore.py sd-backup-TIMESTAMP.tar.gz\n\"\"\"\n\nimport os\nimport subprocess\nimport sys\nimport tarfile\n\n\ndef verify_args():\n usage = \"\"\"\nUsage: restore.py <backup file>\n\n <backup file> Path to a SecureDrop 0.3 backup created by backup.py\"\n \"\"\"\n if len(sys.argv) != 2:\n print(usage)\n sys.exit(1)\n\n if not os.path.exists(sys.argv[1]):\n print(\"<backup file> '{}' not found\".format(sys.argv[1]))\n sys.exit(1)\n\n if os.geteuid() != 0:\n print(\"This program must be run as root!\")\n sys.exit(1)\n\n\ndef main():\n verify_args()\n\n with tarfile.open(sys.argv[1], 'r:*') as backup:\n # This assumes that both the old installation (source of the backup)\n # and the new installation (destination of the restore) used the\n # default paths for various locations.\n backup.extractall(path='/')\n\n # Reload Tor and the web server so they pick up the new configuration\n # If the process exits with a non-zero return code, raises an exception.\n subprocess.check_call(['service', 'apache2', 'restart'])\n subprocess.check_call(['service', 'tor', 'reload'])\n # Apply database migrations (if backed-up version < version to restore)\n subprocess.check_call(['dpkg-reconfigure', 'securedrop-app-code'])\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install_files/ansible-base/roles/restore/files/restore.py"}]} | 1,185 | 415 |
gh_patches_debug_30080 | rasdani/github-patches | git_diff | zulip__zulip-21977 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update "notifications language" setting to use the "Default language" picker modal
The settings UI for picking the "notifications language" (previously "Default language for new users"; see #20866) should use the much nicer language picker component that we have for an individual user's language setting (i.e. this, rather than the simple dropdown).

I haven't looked at how complex this is, but it seems clearly better to reuse that component.
</issue>
<code>
[start of version.py]
1 import os
2
3 ZULIP_VERSION = "6.0-dev+git"
4
5 # Add information on number of commits and commit hash to version, if available
6 zulip_git_version_file = os.path.join(
7 os.path.dirname(os.path.abspath(__file__)), "zulip-git-version"
8 )
9 lines = [ZULIP_VERSION, ""]
10 if os.path.exists(zulip_git_version_file):
11 with open(zulip_git_version_file) as f:
12 lines = f.readlines() + ["", ""]
13 ZULIP_VERSION = lines.pop(0).strip()
14 ZULIP_MERGE_BASE = lines.pop(0).strip()
15
16 LATEST_MAJOR_VERSION = "5.0"
17 LATEST_RELEASE_VERSION = "5.2"
18 LATEST_RELEASE_ANNOUNCEMENT = "https://blog.zulip.com/2022/03/29/zulip-5-0-released/"
19
20 # Versions of the desktop app below DESKTOP_MINIMUM_VERSION will be
21 # prevented from connecting to the Zulip server. Versions above
22 # DESKTOP_MINIMUM_VERSION but below DESKTOP_WARNING_VERSION will have
23 # a banner at the top of the page asking the user to upgrade.
24 DESKTOP_MINIMUM_VERSION = "5.2.0"
25 DESKTOP_WARNING_VERSION = "5.4.3"
26
27 # Bump the API_FEATURE_LEVEL whenever an API change is made
28 # that clients might want to condition on. If we forget at
29 # the time we make the change, then bump it later as soon
30 # as we notice; clients using API_FEATURE_LEVEL will just not
31 # use the new feature/API until the bump.
32 #
33 # Changes should be accompanied by documentation explaining what the
34 # new level means in templates/zerver/api/changelog.md, as well as
35 # "**Changes**" entries in the endpoint's documentation in `zulip.yaml`.
36 API_FEATURE_LEVEL = 132
37
38 # Bump the minor PROVISION_VERSION to indicate that folks should provision
39 # only when going from an old version of the code to a newer version. Bump
40 # the major version to indicate that folks should provision in both
41 # directions.
42
43 # Typically,
44 # * adding a dependency only requires a minor version bump;
45 # * removing a dependency requires a major version bump;
46 # * upgrading a dependency requires a major version bump, unless the
47 # upgraded dependency is backwards compatible with all of our
48 # historical commits sharing the same major version, in which case a
49 # minor version bump suffices.
50
51 PROVISION_VERSION = "190.0"
52
[end of version.py]
[start of zerver/lib/home.py]
1 import calendar
2 import time
3 from dataclasses import dataclass
4 from typing import Any, Dict, List, Optional, Tuple
5
6 from django.conf import settings
7 from django.http import HttpRequest
8 from django.utils import translation
9 from two_factor.utils import default_device
10
11 from zerver.context_processors import get_apps_page_url
12 from zerver.lib.events import do_events_register
13 from zerver.lib.i18n import (
14 get_and_set_request_language,
15 get_language_list,
16 get_language_translation_data,
17 )
18 from zerver.lib.realm_description import get_realm_rendered_description
19 from zerver.lib.request import RequestNotes
20 from zerver.models import Message, Realm, Stream, UserProfile
21 from zerver.views.message_flags import get_latest_update_message_flag_activity
22
23
24 @dataclass
25 class BillingInfo:
26 show_billing: bool
27 show_plans: bool
28
29
30 @dataclass
31 class UserPermissionInfo:
32 color_scheme: int
33 is_guest: bool
34 is_realm_admin: bool
35 is_realm_owner: bool
36 show_webathena: bool
37
38
39 def get_furthest_read_time(user_profile: Optional[UserProfile]) -> Optional[float]:
40 if user_profile is None:
41 return time.time()
42
43 user_activity = get_latest_update_message_flag_activity(user_profile)
44 if user_activity is None:
45 return None
46
47 return calendar.timegm(user_activity.last_visit.utctimetuple())
48
49
50 def get_bot_types(user_profile: Optional[UserProfile]) -> List[Dict[str, object]]:
51 bot_types: List[Dict[str, object]] = []
52 if user_profile is None:
53 return bot_types
54
55 for type_id, name in UserProfile.BOT_TYPES.items():
56 bot_types.append(
57 dict(
58 type_id=type_id,
59 name=name,
60 allowed=type_id in user_profile.allowed_bot_types,
61 )
62 )
63 return bot_types
64
65
66 def promote_sponsoring_zulip_in_realm(realm: Realm) -> bool:
67 if not settings.PROMOTE_SPONSORING_ZULIP:
68 return False
69
70 # If PROMOTE_SPONSORING_ZULIP is enabled, advertise sponsoring
71 # Zulip in the gear menu of non-paying organizations.
72 return realm.plan_type in [Realm.PLAN_TYPE_STANDARD_FREE, Realm.PLAN_TYPE_SELF_HOSTED]
73
74
75 def get_billing_info(user_profile: Optional[UserProfile]) -> BillingInfo:
76 show_billing = False
77 show_plans = False
78 if settings.CORPORATE_ENABLED and user_profile is not None:
79 if user_profile.has_billing_access:
80 from corporate.models import CustomerPlan, get_customer_by_realm
81
82 customer = get_customer_by_realm(user_profile.realm)
83 if customer is not None:
84 if customer.sponsorship_pending:
85 show_billing = True
86 elif CustomerPlan.objects.filter(customer=customer).exists():
87 show_billing = True
88
89 if not user_profile.is_guest and user_profile.realm.plan_type == Realm.PLAN_TYPE_LIMITED:
90 show_plans = True
91
92 return BillingInfo(
93 show_billing=show_billing,
94 show_plans=show_plans,
95 )
96
97
98 def get_user_permission_info(user_profile: Optional[UserProfile]) -> UserPermissionInfo:
99 if user_profile is not None:
100 return UserPermissionInfo(
101 color_scheme=user_profile.color_scheme,
102 is_guest=user_profile.is_guest,
103 is_realm_owner=user_profile.is_realm_owner,
104 is_realm_admin=user_profile.is_realm_admin,
105 show_webathena=user_profile.realm.webathena_enabled,
106 )
107 else:
108 return UserPermissionInfo(
109 color_scheme=UserProfile.COLOR_SCHEME_AUTOMATIC,
110 is_guest=False,
111 is_realm_admin=False,
112 is_realm_owner=False,
113 show_webathena=False,
114 )
115
116
117 def build_page_params_for_home_page_load(
118 request: HttpRequest,
119 user_profile: Optional[UserProfile],
120 realm: Realm,
121 insecure_desktop_app: bool,
122 narrow: List[List[str]],
123 narrow_stream: Optional[Stream],
124 narrow_topic: Optional[str],
125 first_in_realm: bool,
126 prompt_for_invites: bool,
127 needs_tutorial: bool,
128 ) -> Tuple[int, Dict[str, Any]]:
129 """
130 This function computes page_params for when we load the home page.
131
132 The page_params data structure gets sent to the client.
133 """
134 client_capabilities = {
135 "notification_settings_null": True,
136 "bulk_message_deletion": True,
137 "user_avatar_url_field_optional": True,
138 "stream_typing_notifications": False, # Set this to True when frontend support is implemented.
139 "user_settings_object": True,
140 }
141
142 if user_profile is not None:
143 client = RequestNotes.get_notes(request).client
144 assert client is not None
145 register_ret = do_events_register(
146 user_profile,
147 realm,
148 client,
149 apply_markdown=True,
150 client_gravatar=True,
151 slim_presence=True,
152 client_capabilities=client_capabilities,
153 narrow=narrow,
154 include_streams=False,
155 )
156 default_language = register_ret["user_settings"]["default_language"]
157 else:
158 # The spectator client will be fetching the /register response
159 # for spectators via the API. But we still need to set the
160 # values not presence in that object.
161 register_ret = {
162 "queue_id": None,
163 }
164 default_language = realm.default_language
165
166 furthest_read_time = get_furthest_read_time(user_profile)
167
168 request_language = get_and_set_request_language(
169 request,
170 default_language,
171 translation.get_language_from_path(request.path_info),
172 )
173
174 two_fa_enabled = settings.TWO_FACTOR_AUTHENTICATION_ENABLED and user_profile is not None
175 billing_info = get_billing_info(user_profile)
176 user_permission_info = get_user_permission_info(user_profile)
177
178 # Pass parameters to the client-side JavaScript code.
179 # These end up in a JavaScript Object named 'page_params'.
180 page_params = dict(
181 ## Server settings.
182 test_suite=settings.TEST_SUITE,
183 insecure_desktop_app=insecure_desktop_app,
184 login_page=settings.HOME_NOT_LOGGED_IN,
185 warn_no_email=settings.WARN_NO_EMAIL,
186 search_pills_enabled=settings.SEARCH_PILLS_ENABLED,
187 # Only show marketing email settings if on Zulip Cloud
188 corporate_enabled=settings.CORPORATE_ENABLED,
189 ## Misc. extra data.
190 language_list=get_language_list(),
191 needs_tutorial=needs_tutorial,
192 first_in_realm=first_in_realm,
193 prompt_for_invites=prompt_for_invites,
194 furthest_read_time=furthest_read_time,
195 bot_types=get_bot_types(user_profile),
196 two_fa_enabled=two_fa_enabled,
197 apps_page_url=get_apps_page_url(),
198 show_billing=billing_info.show_billing,
199 promote_sponsoring_zulip=promote_sponsoring_zulip_in_realm(realm),
200 show_plans=billing_info.show_plans,
201 show_webathena=user_permission_info.show_webathena,
202 # Adding two_fa_enabled as condition saves us 3 queries when
203 # 2FA is not enabled.
204 two_fa_enabled_user=two_fa_enabled and bool(default_device(user_profile)),
205 is_spectator=user_profile is None,
206 # There is no event queue for spectators since
207 # events support for spectators is not implemented yet.
208 no_event_queue=user_profile is None,
209 )
210
211 for field_name in register_ret.keys():
212 page_params[field_name] = register_ret[field_name]
213
214 if narrow_stream is not None:
215 # In narrow_stream context, initial pointer is just latest message
216 recipient = narrow_stream.recipient
217 try:
218 max_message_id = (
219 Message.objects.filter(recipient=recipient).order_by("id").reverse()[0].id
220 )
221 except IndexError:
222 max_message_id = -1
223 page_params["narrow_stream"] = narrow_stream.name
224 if narrow_topic is not None:
225 page_params["narrow_topic"] = narrow_topic
226 page_params["narrow"] = [dict(operator=term[0], operand=term[1]) for term in narrow]
227 page_params["max_message_id"] = max_message_id
228 assert isinstance(page_params["user_settings"], dict)
229 page_params["user_settings"]["enable_desktop_notifications"] = False
230
231 page_params["translation_data"] = get_language_translation_data(request_language)
232
233 if user_profile is None:
234 # Get rendered version of realm description which is displayed in right
235 # sidebar for spectator.
236 page_params["realm_rendered_description"] = get_realm_rendered_description(realm)
237
238 return register_ret["queue_id"], page_params
239
[end of zerver/lib/home.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/version.py b/version.py
--- a/version.py
+++ b/version.py
@@ -48,4 +48,4 @@
# historical commits sharing the same major version, in which case a
# minor version bump suffices.
-PROVISION_VERSION = "190.0"
+PROVISION_VERSION = "190.1"
diff --git a/zerver/lib/home.py b/zerver/lib/home.py
--- a/zerver/lib/home.py
+++ b/zerver/lib/home.py
@@ -163,14 +163,16 @@
}
default_language = realm.default_language
- furthest_read_time = get_furthest_read_time(user_profile)
-
- request_language = get_and_set_request_language(
- request,
- default_language,
- translation.get_language_from_path(request.path_info),
- )
+ if user_profile is None:
+ request_language = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME, default_language)
+ else:
+ request_language = get_and_set_request_language(
+ request,
+ default_language,
+ translation.get_language_from_path(request.path_info),
+ )
+ furthest_read_time = get_furthest_read_time(user_profile)
two_fa_enabled = settings.TWO_FACTOR_AUTHENTICATION_ENABLED and user_profile is not None
billing_info = get_billing_info(user_profile)
user_permission_info = get_user_permission_info(user_profile)
@@ -234,5 +236,6 @@
# Get rendered version of realm description which is displayed in right
# sidebar for spectator.
page_params["realm_rendered_description"] = get_realm_rendered_description(realm)
+ page_params["language_cookie_name"] = settings.LANGUAGE_COOKIE_NAME
return register_ret["queue_id"], page_params
| {"golden_diff": "diff --git a/version.py b/version.py\n--- a/version.py\n+++ b/version.py\n@@ -48,4 +48,4 @@\n # historical commits sharing the same major version, in which case a\n # minor version bump suffices.\n \n-PROVISION_VERSION = \"190.0\"\n+PROVISION_VERSION = \"190.1\"\ndiff --git a/zerver/lib/home.py b/zerver/lib/home.py\n--- a/zerver/lib/home.py\n+++ b/zerver/lib/home.py\n@@ -163,14 +163,16 @@\n }\n default_language = realm.default_language\n \n- furthest_read_time = get_furthest_read_time(user_profile)\n-\n- request_language = get_and_set_request_language(\n- request,\n- default_language,\n- translation.get_language_from_path(request.path_info),\n- )\n+ if user_profile is None:\n+ request_language = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME, default_language)\n+ else:\n+ request_language = get_and_set_request_language(\n+ request,\n+ default_language,\n+ translation.get_language_from_path(request.path_info),\n+ )\n \n+ furthest_read_time = get_furthest_read_time(user_profile)\n two_fa_enabled = settings.TWO_FACTOR_AUTHENTICATION_ENABLED and user_profile is not None\n billing_info = get_billing_info(user_profile)\n user_permission_info = get_user_permission_info(user_profile)\n@@ -234,5 +236,6 @@\n # Get rendered version of realm description which is displayed in right\n # sidebar for spectator.\n page_params[\"realm_rendered_description\"] = get_realm_rendered_description(realm)\n+ page_params[\"language_cookie_name\"] = settings.LANGUAGE_COOKIE_NAME\n \n return register_ret[\"queue_id\"], page_params\n", "issue": "Update \"notifications language\" setting to use the \"Default language\" picker modal\nThe settings UI for picking the \"notifications language\" (previously \"Default language for new users\"; see #20866) should use the much nicer language picker component that we have for an individual user's language setting (i.e. this, rather than the simple dropdown).\r\n\r\n\r\n\r\nI haven't looked at how complex this is, but it seems clearly better to reuse that component.\n", "before_files": [{"content": "import os\n\nZULIP_VERSION = \"6.0-dev+git\"\n\n# Add information on number of commits and commit hash to version, if available\nzulip_git_version_file = os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"zulip-git-version\"\n)\nlines = [ZULIP_VERSION, \"\"]\nif os.path.exists(zulip_git_version_file):\n with open(zulip_git_version_file) as f:\n lines = f.readlines() + [\"\", \"\"]\nZULIP_VERSION = lines.pop(0).strip()\nZULIP_MERGE_BASE = lines.pop(0).strip()\n\nLATEST_MAJOR_VERSION = \"5.0\"\nLATEST_RELEASE_VERSION = \"5.2\"\nLATEST_RELEASE_ANNOUNCEMENT = \"https://blog.zulip.com/2022/03/29/zulip-5-0-released/\"\n\n# Versions of the desktop app below DESKTOP_MINIMUM_VERSION will be\n# prevented from connecting to the Zulip server. Versions above\n# DESKTOP_MINIMUM_VERSION but below DESKTOP_WARNING_VERSION will have\n# a banner at the top of the page asking the user to upgrade.\nDESKTOP_MINIMUM_VERSION = \"5.2.0\"\nDESKTOP_WARNING_VERSION = \"5.4.3\"\n\n# Bump the API_FEATURE_LEVEL whenever an API change is made\n# that clients might want to condition on. If we forget at\n# the time we make the change, then bump it later as soon\n# as we notice; clients using API_FEATURE_LEVEL will just not\n# use the new feature/API until the bump.\n#\n# Changes should be accompanied by documentation explaining what the\n# new level means in templates/zerver/api/changelog.md, as well as\n# \"**Changes**\" entries in the endpoint's documentation in `zulip.yaml`.\nAPI_FEATURE_LEVEL = 132\n\n# Bump the minor PROVISION_VERSION to indicate that folks should provision\n# only when going from an old version of the code to a newer version. Bump\n# the major version to indicate that folks should provision in both\n# directions.\n\n# Typically,\n# * adding a dependency only requires a minor version bump;\n# * removing a dependency requires a major version bump;\n# * upgrading a dependency requires a major version bump, unless the\n# upgraded dependency is backwards compatible with all of our\n# historical commits sharing the same major version, in which case a\n# minor version bump suffices.\n\nPROVISION_VERSION = \"190.0\"\n", "path": "version.py"}, {"content": "import calendar\nimport time\nfrom dataclasses import dataclass\nfrom typing import Any, Dict, List, Optional, Tuple\n\nfrom django.conf import settings\nfrom django.http import HttpRequest\nfrom django.utils import translation\nfrom two_factor.utils import default_device\n\nfrom zerver.context_processors import get_apps_page_url\nfrom zerver.lib.events import do_events_register\nfrom zerver.lib.i18n import (\n get_and_set_request_language,\n get_language_list,\n get_language_translation_data,\n)\nfrom zerver.lib.realm_description import get_realm_rendered_description\nfrom zerver.lib.request import RequestNotes\nfrom zerver.models import Message, Realm, Stream, UserProfile\nfrom zerver.views.message_flags import get_latest_update_message_flag_activity\n\n\n@dataclass\nclass BillingInfo:\n show_billing: bool\n show_plans: bool\n\n\n@dataclass\nclass UserPermissionInfo:\n color_scheme: int\n is_guest: bool\n is_realm_admin: bool\n is_realm_owner: bool\n show_webathena: bool\n\n\ndef get_furthest_read_time(user_profile: Optional[UserProfile]) -> Optional[float]:\n if user_profile is None:\n return time.time()\n\n user_activity = get_latest_update_message_flag_activity(user_profile)\n if user_activity is None:\n return None\n\n return calendar.timegm(user_activity.last_visit.utctimetuple())\n\n\ndef get_bot_types(user_profile: Optional[UserProfile]) -> List[Dict[str, object]]:\n bot_types: List[Dict[str, object]] = []\n if user_profile is None:\n return bot_types\n\n for type_id, name in UserProfile.BOT_TYPES.items():\n bot_types.append(\n dict(\n type_id=type_id,\n name=name,\n allowed=type_id in user_profile.allowed_bot_types,\n )\n )\n return bot_types\n\n\ndef promote_sponsoring_zulip_in_realm(realm: Realm) -> bool:\n if not settings.PROMOTE_SPONSORING_ZULIP:\n return False\n\n # If PROMOTE_SPONSORING_ZULIP is enabled, advertise sponsoring\n # Zulip in the gear menu of non-paying organizations.\n return realm.plan_type in [Realm.PLAN_TYPE_STANDARD_FREE, Realm.PLAN_TYPE_SELF_HOSTED]\n\n\ndef get_billing_info(user_profile: Optional[UserProfile]) -> BillingInfo:\n show_billing = False\n show_plans = False\n if settings.CORPORATE_ENABLED and user_profile is not None:\n if user_profile.has_billing_access:\n from corporate.models import CustomerPlan, get_customer_by_realm\n\n customer = get_customer_by_realm(user_profile.realm)\n if customer is not None:\n if customer.sponsorship_pending:\n show_billing = True\n elif CustomerPlan.objects.filter(customer=customer).exists():\n show_billing = True\n\n if not user_profile.is_guest and user_profile.realm.plan_type == Realm.PLAN_TYPE_LIMITED:\n show_plans = True\n\n return BillingInfo(\n show_billing=show_billing,\n show_plans=show_plans,\n )\n\n\ndef get_user_permission_info(user_profile: Optional[UserProfile]) -> UserPermissionInfo:\n if user_profile is not None:\n return UserPermissionInfo(\n color_scheme=user_profile.color_scheme,\n is_guest=user_profile.is_guest,\n is_realm_owner=user_profile.is_realm_owner,\n is_realm_admin=user_profile.is_realm_admin,\n show_webathena=user_profile.realm.webathena_enabled,\n )\n else:\n return UserPermissionInfo(\n color_scheme=UserProfile.COLOR_SCHEME_AUTOMATIC,\n is_guest=False,\n is_realm_admin=False,\n is_realm_owner=False,\n show_webathena=False,\n )\n\n\ndef build_page_params_for_home_page_load(\n request: HttpRequest,\n user_profile: Optional[UserProfile],\n realm: Realm,\n insecure_desktop_app: bool,\n narrow: List[List[str]],\n narrow_stream: Optional[Stream],\n narrow_topic: Optional[str],\n first_in_realm: bool,\n prompt_for_invites: bool,\n needs_tutorial: bool,\n) -> Tuple[int, Dict[str, Any]]:\n \"\"\"\n This function computes page_params for when we load the home page.\n\n The page_params data structure gets sent to the client.\n \"\"\"\n client_capabilities = {\n \"notification_settings_null\": True,\n \"bulk_message_deletion\": True,\n \"user_avatar_url_field_optional\": True,\n \"stream_typing_notifications\": False, # Set this to True when frontend support is implemented.\n \"user_settings_object\": True,\n }\n\n if user_profile is not None:\n client = RequestNotes.get_notes(request).client\n assert client is not None\n register_ret = do_events_register(\n user_profile,\n realm,\n client,\n apply_markdown=True,\n client_gravatar=True,\n slim_presence=True,\n client_capabilities=client_capabilities,\n narrow=narrow,\n include_streams=False,\n )\n default_language = register_ret[\"user_settings\"][\"default_language\"]\n else:\n # The spectator client will be fetching the /register response\n # for spectators via the API. But we still need to set the\n # values not presence in that object.\n register_ret = {\n \"queue_id\": None,\n }\n default_language = realm.default_language\n\n furthest_read_time = get_furthest_read_time(user_profile)\n\n request_language = get_and_set_request_language(\n request,\n default_language,\n translation.get_language_from_path(request.path_info),\n )\n\n two_fa_enabled = settings.TWO_FACTOR_AUTHENTICATION_ENABLED and user_profile is not None\n billing_info = get_billing_info(user_profile)\n user_permission_info = get_user_permission_info(user_profile)\n\n # Pass parameters to the client-side JavaScript code.\n # These end up in a JavaScript Object named 'page_params'.\n page_params = dict(\n ## Server settings.\n test_suite=settings.TEST_SUITE,\n insecure_desktop_app=insecure_desktop_app,\n login_page=settings.HOME_NOT_LOGGED_IN,\n warn_no_email=settings.WARN_NO_EMAIL,\n search_pills_enabled=settings.SEARCH_PILLS_ENABLED,\n # Only show marketing email settings if on Zulip Cloud\n corporate_enabled=settings.CORPORATE_ENABLED,\n ## Misc. extra data.\n language_list=get_language_list(),\n needs_tutorial=needs_tutorial,\n first_in_realm=first_in_realm,\n prompt_for_invites=prompt_for_invites,\n furthest_read_time=furthest_read_time,\n bot_types=get_bot_types(user_profile),\n two_fa_enabled=two_fa_enabled,\n apps_page_url=get_apps_page_url(),\n show_billing=billing_info.show_billing,\n promote_sponsoring_zulip=promote_sponsoring_zulip_in_realm(realm),\n show_plans=billing_info.show_plans,\n show_webathena=user_permission_info.show_webathena,\n # Adding two_fa_enabled as condition saves us 3 queries when\n # 2FA is not enabled.\n two_fa_enabled_user=two_fa_enabled and bool(default_device(user_profile)),\n is_spectator=user_profile is None,\n # There is no event queue for spectators since\n # events support for spectators is not implemented yet.\n no_event_queue=user_profile is None,\n )\n\n for field_name in register_ret.keys():\n page_params[field_name] = register_ret[field_name]\n\n if narrow_stream is not None:\n # In narrow_stream context, initial pointer is just latest message\n recipient = narrow_stream.recipient\n try:\n max_message_id = (\n Message.objects.filter(recipient=recipient).order_by(\"id\").reverse()[0].id\n )\n except IndexError:\n max_message_id = -1\n page_params[\"narrow_stream\"] = narrow_stream.name\n if narrow_topic is not None:\n page_params[\"narrow_topic\"] = narrow_topic\n page_params[\"narrow\"] = [dict(operator=term[0], operand=term[1]) for term in narrow]\n page_params[\"max_message_id\"] = max_message_id\n assert isinstance(page_params[\"user_settings\"], dict)\n page_params[\"user_settings\"][\"enable_desktop_notifications\"] = False\n\n page_params[\"translation_data\"] = get_language_translation_data(request_language)\n\n if user_profile is None:\n # Get rendered version of realm description which is displayed in right\n # sidebar for spectator.\n page_params[\"realm_rendered_description\"] = get_realm_rendered_description(realm)\n\n return register_ret[\"queue_id\"], page_params\n", "path": "zerver/lib/home.py"}]} | 3,788 | 392 |
gh_patches_debug_34300 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-142 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Admin UI suggestions
As I've been using the admin UI a bit for my class, I just had a few things that I thought would be nice to have/change in it:
- move the "add user" button to the top -- it is annoying with lots of users to have to scroll all the way down to find it
- add some simple statistics at the top of the page: number of running servers, number of currently active users, etc.
- it would be awesome to be able to sort users by the different columns -- admin, alphabetically, by last seen
- currently, if you shut down a user's server, it causes the page to refresh which often jumps you up to the top (or just somewhere else). It would be nice if the update could be done in the background without actually reloading the page.
Obviously, none of these are urgent, but I think they would make the admin experience a little easier. I can open separate issues for them if so desired.
</issue>
<code>
[start of jupyterhub/handlers/pages.py]
1 """Basic html-rendering handlers."""
2
3 # Copyright (c) Jupyter Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from tornado import web
7
8 from .. import orm
9 from ..utils import admin_only, url_path_join
10 from .base import BaseHandler
11
12
13 class RootHandler(BaseHandler):
14 """Render the Hub root page.
15
16 Currently redirects to home if logged in,
17 shows big fat login button otherwise.
18 """
19 def get(self):
20 if self.get_current_user():
21 self.redirect(
22 url_path_join(self.hub.server.base_url, 'home'),
23 permanent=False,
24 )
25 return
26
27 html = self.render_template('index.html',
28 login_url=self.settings['login_url'],
29 )
30 self.finish(html)
31
32 class HomeHandler(BaseHandler):
33 """Render the user's home page."""
34
35 @web.authenticated
36 def get(self):
37 html = self.render_template('home.html',
38 user=self.get_current_user(),
39 )
40 self.finish(html)
41
42
43 class AdminHandler(BaseHandler):
44 """Render the admin page."""
45
46 @admin_only
47 def get(self):
48 html = self.render_template('admin.html',
49 user=self.get_current_user(),
50 users=self.db.query(orm.User),
51 admin_access=self.settings.get('admin_access', False),
52 )
53 self.finish(html)
54
55
56 default_handlers = [
57 (r'/', RootHandler),
58 (r'/home', HomeHandler),
59 (r'/admin', AdminHandler),
60 ]
61
[end of jupyterhub/handlers/pages.py]
[start of jupyterhub/apihandlers/base.py]
1 """Base API handlers"""
2 # Copyright (c) Jupyter Development Team.
3 # Distributed under the terms of the Modified BSD License.
4
5 import json
6
7 from http.client import responses
8
9 from tornado import web
10
11 from ..handlers import BaseHandler
12
13 class APIHandler(BaseHandler):
14 def get_json_body(self):
15 """Return the body of the request as JSON data."""
16 if not self.request.body:
17 return None
18 body = self.request.body.strip().decode('utf-8')
19 try:
20 model = json.loads(body)
21 except Exception:
22 self.log.debug("Bad JSON: %r", body)
23 self.log.error("Couldn't parse JSON", exc_info=True)
24 raise web.HTTPError(400, 'Invalid JSON in body of request')
25 return model
26
27
28 def write_error(self, status_code, **kwargs):
29 """Write JSON errors instead of HTML"""
30 exc_info = kwargs.get('exc_info')
31 message = ''
32 status_message = responses.get(status_code, 'Unknown Error')
33 if exc_info:
34 exception = exc_info[1]
35 # get the custom message, if defined
36 try:
37 message = exception.log_message % exception.args
38 except Exception:
39 pass
40
41 # construct the custom reason, if defined
42 reason = getattr(exception, 'reason', '')
43 if reason:
44 status_message = reason
45
46 self.write(json.dumps({
47 'status': status_code,
48 'message': message or status_message,
49 }))
50
[end of jupyterhub/apihandlers/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jupyterhub/apihandlers/base.py b/jupyterhub/apihandlers/base.py
--- a/jupyterhub/apihandlers/base.py
+++ b/jupyterhub/apihandlers/base.py
@@ -42,7 +42,7 @@
reason = getattr(exception, 'reason', '')
if reason:
status_message = reason
-
+ self.set_header('Content-Type', 'application/json')
self.write(json.dumps({
'status': status_code,
'message': message or status_message,
diff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py
--- a/jupyterhub/handlers/pages.py
+++ b/jupyterhub/handlers/pages.py
@@ -45,10 +45,52 @@
@admin_only
def get(self):
+ available = {'name', 'admin', 'running', 'last_activity'}
+ default_sort = ['admin', 'name']
+ mapping = {
+ 'running': '_server_id'
+ }
+ default_order = {
+ 'name': 'asc',
+ 'last_activity': 'desc',
+ 'admin': 'desc',
+ 'running': 'desc',
+ }
+ sorts = self.get_arguments('sort') or default_sort
+ orders = self.get_arguments('order')
+
+ for bad in set(sorts).difference(available):
+ self.log.warn("ignoring invalid sort: %r", bad)
+ sorts.remove(bad)
+ for bad in set(orders).difference({'asc', 'desc'}):
+ self.log.warn("ignoring invalid order: %r", bad)
+ orders.remove(bad)
+
+ # add default sort as secondary
+ for s in default_sort:
+ if s not in sorts:
+ sorts.append(s)
+ if len(orders) < len(sorts):
+ for col in sorts[len(orders):]:
+ orders.append(default_order[col])
+ else:
+ orders = orders[:len(sorts)]
+
+ # this could be one incomprehensible nested list comprehension
+ # get User columns
+ cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]
+ # get User.col.desc() order objects
+ ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]
+
+ users = self.db.query(orm.User).order_by(*ordered)
+ running = users.filter(orm.User.server != None)
+
html = self.render_template('admin.html',
user=self.get_current_user(),
- users=self.db.query(orm.User),
admin_access=self.settings.get('admin_access', False),
+ users=users,
+ running=running,
+ sort={s:o for s,o in zip(sorts, orders)},
)
self.finish(html)
| {"golden_diff": "diff --git a/jupyterhub/apihandlers/base.py b/jupyterhub/apihandlers/base.py\n--- a/jupyterhub/apihandlers/base.py\n+++ b/jupyterhub/apihandlers/base.py\n@@ -42,7 +42,7 @@\n reason = getattr(exception, 'reason', '')\n if reason:\n status_message = reason\n- \n+ self.set_header('Content-Type', 'application/json')\n self.write(json.dumps({\n 'status': status_code,\n 'message': message or status_message,\ndiff --git a/jupyterhub/handlers/pages.py b/jupyterhub/handlers/pages.py\n--- a/jupyterhub/handlers/pages.py\n+++ b/jupyterhub/handlers/pages.py\n@@ -45,10 +45,52 @@\n \n @admin_only\n def get(self):\n+ available = {'name', 'admin', 'running', 'last_activity'}\n+ default_sort = ['admin', 'name']\n+ mapping = {\n+ 'running': '_server_id'\n+ }\n+ default_order = {\n+ 'name': 'asc',\n+ 'last_activity': 'desc',\n+ 'admin': 'desc',\n+ 'running': 'desc',\n+ }\n+ sorts = self.get_arguments('sort') or default_sort\n+ orders = self.get_arguments('order')\n+ \n+ for bad in set(sorts).difference(available):\n+ self.log.warn(\"ignoring invalid sort: %r\", bad)\n+ sorts.remove(bad)\n+ for bad in set(orders).difference({'asc', 'desc'}):\n+ self.log.warn(\"ignoring invalid order: %r\", bad)\n+ orders.remove(bad)\n+ \n+ # add default sort as secondary\n+ for s in default_sort:\n+ if s not in sorts:\n+ sorts.append(s)\n+ if len(orders) < len(sorts):\n+ for col in sorts[len(orders):]:\n+ orders.append(default_order[col])\n+ else:\n+ orders = orders[:len(sorts)]\n+ \n+ # this could be one incomprehensible nested list comprehension\n+ # get User columns\n+ cols = [ getattr(orm.User, mapping.get(c, c)) for c in sorts ]\n+ # get User.col.desc() order objects\n+ ordered = [ getattr(c, o)() for c, o in zip(cols, orders) ]\n+ \n+ users = self.db.query(orm.User).order_by(*ordered)\n+ running = users.filter(orm.User.server != None)\n+ \n html = self.render_template('admin.html',\n user=self.get_current_user(),\n- users=self.db.query(orm.User),\n admin_access=self.settings.get('admin_access', False),\n+ users=users,\n+ running=running,\n+ sort={s:o for s,o in zip(sorts, orders)},\n )\n self.finish(html)\n", "issue": "Admin UI suggestions\nAs I've been using the admin UI a bit for my class, I just had a few things that I thought would be nice to have/change in it:\n- move the \"add user\" button to the top -- it is annoying with lots of users to have to scroll all the way down to find it\n- add some simple statistics at the top of the page: number of running servers, number of currently active users, etc.\n- it would be awesome to be able to sort users by the different columns -- admin, alphabetically, by last seen\n- currently, if you shut down a user's server, it causes the page to refresh which often jumps you up to the top (or just somewhere else). It would be nice if the update could be done in the background without actually reloading the page.\n\nObviously, none of these are urgent, but I think they would make the admin experience a little easier. I can open separate issues for them if so desired.\n\n", "before_files": [{"content": "\"\"\"Basic html-rendering handlers.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom tornado import web\n\nfrom .. import orm\nfrom ..utils import admin_only, url_path_join\nfrom .base import BaseHandler\n\n\nclass RootHandler(BaseHandler):\n \"\"\"Render the Hub root page.\n \n Currently redirects to home if logged in,\n shows big fat login button otherwise.\n \"\"\"\n def get(self):\n if self.get_current_user():\n self.redirect(\n url_path_join(self.hub.server.base_url, 'home'),\n permanent=False,\n )\n return\n \n html = self.render_template('index.html',\n login_url=self.settings['login_url'],\n )\n self.finish(html)\n\nclass HomeHandler(BaseHandler):\n \"\"\"Render the user's home page.\"\"\"\n\n @web.authenticated\n def get(self):\n html = self.render_template('home.html',\n user=self.get_current_user(),\n )\n self.finish(html)\n\n\nclass AdminHandler(BaseHandler):\n \"\"\"Render the admin page.\"\"\"\n\n @admin_only\n def get(self):\n html = self.render_template('admin.html',\n user=self.get_current_user(),\n users=self.db.query(orm.User),\n admin_access=self.settings.get('admin_access', False),\n )\n self.finish(html)\n\n\ndefault_handlers = [\n (r'/', RootHandler),\n (r'/home', HomeHandler),\n (r'/admin', AdminHandler),\n]\n", "path": "jupyterhub/handlers/pages.py"}, {"content": "\"\"\"Base API handlers\"\"\"\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport json\n\nfrom http.client import responses\n\nfrom tornado import web\n\nfrom ..handlers import BaseHandler\n\nclass APIHandler(BaseHandler):\n def get_json_body(self):\n \"\"\"Return the body of the request as JSON data.\"\"\"\n if not self.request.body:\n return None\n body = self.request.body.strip().decode('utf-8')\n try:\n model = json.loads(body)\n except Exception:\n self.log.debug(\"Bad JSON: %r\", body)\n self.log.error(\"Couldn't parse JSON\", exc_info=True)\n raise web.HTTPError(400, 'Invalid JSON in body of request')\n return model\n \n \n def write_error(self, status_code, **kwargs):\n \"\"\"Write JSON errors instead of HTML\"\"\"\n exc_info = kwargs.get('exc_info')\n message = ''\n status_message = responses.get(status_code, 'Unknown Error')\n if exc_info:\n exception = exc_info[1]\n # get the custom message, if defined\n try:\n message = exception.log_message % exception.args\n except Exception:\n pass\n\n # construct the custom reason, if defined\n reason = getattr(exception, 'reason', '')\n if reason:\n status_message = reason\n \n self.write(json.dumps({\n 'status': status_code,\n 'message': message or status_message,\n }))\n", "path": "jupyterhub/apihandlers/base.py"}]} | 1,597 | 629 |
gh_patches_debug_1499 | rasdani/github-patches | git_diff | inventree__InvenTree-5627 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stocktake doesn't save parts with no stock
### Please verify that this bug has NOT been raised before.
- [X] I checked and didn't find a similar issue
### Describe the bug*
Stocktake is ignoring active parts with 0 stock. (see https://github.com/inventree/InvenTree/blob/master/InvenTree/part/stocktake.py#L252-L254)
### Steps to Reproduce
1. Add a Part
2. Give it some Stock
3. Run stocktake
4. Sell all the Stock
5. Run stocktake again
6. In the Parts stocktake you'll see no new ("0") entry
### Expected behaviour
If I have an active part and I run stocktake, I expect the Part to be noted down with "0 Stock at DateTime".
### Deployment Method
- [X] Docker
- [ ] Bare metal
### Version Information
# Version Information:
InvenTree-Version: 0.13.0 dev
Django Version: 3.2.21
Commit Hash: 2b0d81f
Commit Date: 2023-09-25
Database: postgresql
Debug-Mode: False
Deployed using Docker: True
Platform: Linux-5.15.0-82-generic-x86_64-with
Installer: DOC
Active plugins: False
### Please verify if you can reproduce this bug on the demo site.
- [X] I can reproduce this bug on the demo site.
### Relevant log output
_No response_
</issue>
<code>
[start of InvenTree/part/stocktake.py]
1 """Stocktake report functionality"""
2
3 import io
4 import logging
5 import time
6 from datetime import datetime
7
8 from django.contrib.auth.models import User
9 from django.core.files.base import ContentFile
10 from django.utils.translation import gettext_lazy as _
11
12 import tablib
13 from djmoney.contrib.exchange.models import convert_money
14 from djmoney.money import Money
15
16 import common.models
17 import InvenTree.helpers
18 import part.models
19 import stock.models
20
21 logger = logging.getLogger('inventree')
22
23
24 def perform_stocktake(target: part.models.Part, user: User, note: str = '', commit=True, **kwargs):
25 """Perform stocktake action on a single part.
26
27 arguments:
28 target: A single Part model instance
29 commit: If True (default) save the result to the database
30 user: User who requested this stocktake
31
32 kwargs:
33 exclude_external: If True, exclude stock items in external locations (default = False)
34 location: Optional StockLocation to filter results for generated report
35
36 Returns:
37 PartStocktake: A new PartStocktake model instance (for the specified Part)
38
39 Note that while we record a *total stocktake* for the Part instance which gets saved to the database,
40 the user may have requested a stocktake limited to a particular location.
41
42 In this case, the stocktake *report* will be limited to the specified location.
43 """
44
45 # Determine which locations are "valid" for the generated report
46 location = kwargs.get('location', None)
47 locations = location.get_descendants(include_self=True) if location else []
48
49 # Grab all "available" stock items for the Part
50 # We do not include variant stock when performing a stocktake,
51 # otherwise the stocktake entries will be duplicated
52 stock_entries = target.stock_entries(in_stock=True, include_variants=False)
53
54 exclude_external = kwargs.get('exclude_external', False)
55
56 if exclude_external:
57 stock_entries = stock_entries.exclude(location__external=True)
58
59 # Cache min/max pricing information for this Part
60 pricing = target.pricing
61
62 if not pricing.is_valid:
63 # If pricing is not valid, let's update
64 logger.info("Pricing not valid for %s - updating", target)
65 pricing.update_pricing(cascade=False)
66 pricing.refresh_from_db()
67
68 base_currency = common.settings.currency_code_default()
69
70 # Keep track of total quantity and cost for this part
71 total_quantity = 0
72 total_cost_min = Money(0, base_currency)
73 total_cost_max = Money(0, base_currency)
74
75 # Separately, keep track of stock quantity and value within the specified location
76 location_item_count = 0
77 location_quantity = 0
78 location_cost_min = Money(0, base_currency)
79 location_cost_max = Money(0, base_currency)
80
81 for entry in stock_entries:
82
83 entry_cost_min = None
84 entry_cost_max = None
85
86 # Update price range values
87 if entry.purchase_price:
88 entry_cost_min = entry.purchase_price
89 entry_cost_max = entry.purchase_price
90
91 else:
92 # If no purchase price is available, fall back to the part pricing data
93 entry_cost_min = pricing.overall_min or pricing.overall_max
94 entry_cost_max = pricing.overall_max or pricing.overall_min
95
96 # Convert to base currency
97 try:
98 entry_cost_min = convert_money(entry_cost_min, base_currency) * entry.quantity
99 entry_cost_max = convert_money(entry_cost_max, base_currency) * entry.quantity
100 except Exception:
101
102 entry_cost_min = Money(0, base_currency)
103 entry_cost_max = Money(0, base_currency)
104
105 # Update total cost values
106 total_quantity += entry.quantity
107 total_cost_min += entry_cost_min
108 total_cost_max += entry_cost_max
109
110 # Test if this stock item is within the specified location
111 if location and entry.location not in locations:
112 continue
113
114 # Update location cost values
115 location_item_count += 1
116 location_quantity += entry.quantity
117 location_cost_min += entry_cost_min
118 location_cost_max += entry_cost_max
119
120 # Construct PartStocktake instance
121 # Note that we use the *total* values for the PartStocktake instance
122 instance = part.models.PartStocktake(
123 part=target,
124 item_count=stock_entries.count(),
125 quantity=total_quantity,
126 cost_min=total_cost_min,
127 cost_max=total_cost_max,
128 note=note,
129 user=user,
130 )
131
132 if commit:
133 instance.save()
134
135 # Add location-specific data to the instance
136 instance.location_item_count = location_item_count
137 instance.location_quantity = location_quantity
138 instance.location_cost_min = location_cost_min
139 instance.location_cost_max = location_cost_max
140
141 return instance
142
143
144 def generate_stocktake_report(**kwargs):
145 """Generated a new stocktake report.
146
147 Note that this method should be called only by the background worker process!
148
149 Unless otherwise specified, the stocktake report is generated for *all* Part instances.
150 Optional filters can by supplied via the kwargs
151
152 kwargs:
153 user: The user who requested this stocktake (set to None for automated stocktake)
154 part: Optional Part instance to filter by (including variant parts)
155 category: Optional PartCategory to filter results
156 location: Optional StockLocation to filter results
157 exclude_external: If True, exclude stock items in external locations (default = False)
158 generate_report: If True, generate a stocktake report from the calculated data (default=True)
159 update_parts: If True, save stocktake information against each filtered Part (default = True)
160 """
161
162 # Determine if external locations should be excluded
163 exclude_external = kwargs.get(
164 'exclude_exernal',
165 common.models.InvenTreeSetting.get_setting('STOCKTAKE_EXCLUDE_EXTERNAL', False)
166 )
167
168 parts = part.models.Part.objects.all()
169 user = kwargs.get('user', None)
170
171 generate_report = kwargs.get('generate_report', True)
172 update_parts = kwargs.get('update_parts', True)
173
174 # Filter by 'Part' instance
175 if p := kwargs.get('part', None):
176 variants = p.get_descendants(include_self=True)
177 parts = parts.filter(
178 pk__in=[v.pk for v in variants]
179 )
180
181 # Filter by 'Category' instance (cascading)
182 if category := kwargs.get('category', None):
183 categories = category.get_descendants(include_self=True)
184 parts = parts.filter(category__in=categories)
185
186 # Filter by 'Location' instance (cascading)
187 # Stocktake report will be limited to parts which have stock items within this location
188 if location := kwargs.get('location', None):
189 # Extract flat list of all sublocations
190 locations = list(location.get_descendants(include_self=True))
191
192 # Items which exist within these locations
193 items = stock.models.StockItem.objects.filter(location__in=locations)
194
195 if exclude_external:
196 items = items.exclude(location__external=True)
197
198 # List of parts which exist within these locations
199 unique_parts = items.order_by().values('part').distinct()
200
201 parts = parts.filter(
202 pk__in=[result['part'] for result in unique_parts]
203 )
204
205 # Exit if filters removed all parts
206 n_parts = parts.count()
207
208 if n_parts == 0:
209 logger.info("No parts selected for stocktake report - exiting")
210 return
211
212 logger.info("Generating new stocktake report for %s parts", n_parts)
213
214 base_currency = common.settings.currency_code_default()
215
216 # Construct an initial dataset for the stocktake report
217 dataset = tablib.Dataset(
218 headers=[
219 _('Part ID'),
220 _('Part Name'),
221 _('Part Description'),
222 _('Category ID'),
223 _('Category Name'),
224 _('Stock Items'),
225 _('Total Quantity'),
226 _('Total Cost Min') + f' ({base_currency})',
227 _('Total Cost Max') + f' ({base_currency})',
228 ]
229 )
230
231 parts = parts.prefetch_related('category', 'stock_items')
232
233 # Simple profiling for this task
234 t_start = time.time()
235
236 # Keep track of each individual "stocktake" we perform.
237 # They may be bulk-commited to the database afterwards
238 stocktake_instances = []
239
240 total_parts = 0
241
242 # Iterate through each Part which matches the filters above
243 for p in parts:
244
245 # Create a new stocktake for this part (do not commit, this will take place later on)
246 stocktake = perform_stocktake(
247 p, user, commit=False,
248 exclude_external=exclude_external,
249 location=location,
250 )
251
252 if stocktake.quantity == 0:
253 # Skip rows with zero total quantity
254 continue
255
256 total_parts += 1
257
258 stocktake_instances.append(stocktake)
259
260 # Add a row to the dataset
261 dataset.append([
262 p.pk,
263 p.full_name,
264 p.description,
265 p.category.pk if p.category else '',
266 p.category.name if p.category else '',
267 stocktake.location_item_count,
268 stocktake.location_quantity,
269 InvenTree.helpers.normalize(stocktake.location_cost_min.amount),
270 InvenTree.helpers.normalize(stocktake.location_cost_max.amount),
271 ])
272
273 # Save a new PartStocktakeReport instance
274 buffer = io.StringIO()
275 buffer.write(dataset.export('csv'))
276
277 today = datetime.now().date().isoformat()
278 filename = f"InvenTree_Stocktake_{today}.csv"
279 report_file = ContentFile(buffer.getvalue(), name=filename)
280
281 if generate_report:
282 report_instance = part.models.PartStocktakeReport.objects.create(
283 report=report_file,
284 part_count=total_parts,
285 user=user
286 )
287
288 # Notify the requesting user
289 if user:
290
291 common.notifications.trigger_notification(
292 report_instance,
293 category='generate_stocktake_report',
294 context={
295 'name': _('Stocktake Report Available'),
296 'message': _('A new stocktake report is available for download'),
297 },
298 targets=[
299 user,
300 ]
301 )
302
303 # If 'update_parts' is set, we save stocktake entries for each individual part
304 if update_parts:
305 # Use bulk_create for efficient insertion of stocktake
306 part.models.PartStocktake.objects.bulk_create(
307 stocktake_instances,
308 batch_size=500,
309 )
310
311 t_stocktake = time.time() - t_start
312 logger.info("Generated stocktake report for %s parts in %ss", total_parts, round(t_stocktake, 2))
313
[end of InvenTree/part/stocktake.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/InvenTree/part/stocktake.py b/InvenTree/part/stocktake.py
--- a/InvenTree/part/stocktake.py
+++ b/InvenTree/part/stocktake.py
@@ -249,10 +249,6 @@
location=location,
)
- if stocktake.quantity == 0:
- # Skip rows with zero total quantity
- continue
-
total_parts += 1
stocktake_instances.append(stocktake)
| {"golden_diff": "diff --git a/InvenTree/part/stocktake.py b/InvenTree/part/stocktake.py\n--- a/InvenTree/part/stocktake.py\n+++ b/InvenTree/part/stocktake.py\n@@ -249,10 +249,6 @@\n location=location,\n )\n \n- if stocktake.quantity == 0:\n- # Skip rows with zero total quantity\n- continue\n-\n total_parts += 1\n \n stocktake_instances.append(stocktake)\n", "issue": "Stocktake doesn't save parts with no stock\n### Please verify that this bug has NOT been raised before.\r\n\r\n- [X] I checked and didn't find a similar issue\r\n\r\n### Describe the bug*\r\n\r\nStocktake is ignoring active parts with 0 stock. (see https://github.com/inventree/InvenTree/blob/master/InvenTree/part/stocktake.py#L252-L254)\r\n\r\n### Steps to Reproduce\r\n\r\n1. Add a Part\r\n2. Give it some Stock\r\n3. Run stocktake\r\n4. Sell all the Stock\r\n5. Run stocktake again\r\n6. In the Parts stocktake you'll see no new (\"0\") entry \r\n\r\n### Expected behaviour\r\n\r\nIf I have an active part and I run stocktake, I expect the Part to be noted down with \"0 Stock at DateTime\".\r\n\r\n### Deployment Method\r\n\r\n- [X] Docker\r\n- [ ] Bare metal\r\n\r\n### Version Information\r\n\r\n# Version Information:\r\nInvenTree-Version: 0.13.0 dev\r\nDjango Version: 3.2.21\r\nCommit Hash: 2b0d81f\r\nCommit Date: 2023-09-25\r\n\r\nDatabase: postgresql\r\nDebug-Mode: False\r\nDeployed using Docker: True\r\nPlatform: Linux-5.15.0-82-generic-x86_64-with\r\nInstaller: DOC\r\n\r\nActive plugins: False\r\n\r\n\r\n### Please verify if you can reproduce this bug on the demo site.\r\n\r\n- [X] I can reproduce this bug on the demo site.\r\n\r\n### Relevant log output\r\n\r\n_No response_\n", "before_files": [{"content": "\"\"\"Stocktake report functionality\"\"\"\n\nimport io\nimport logging\nimport time\nfrom datetime import datetime\n\nfrom django.contrib.auth.models import User\nfrom django.core.files.base import ContentFile\nfrom django.utils.translation import gettext_lazy as _\n\nimport tablib\nfrom djmoney.contrib.exchange.models import convert_money\nfrom djmoney.money import Money\n\nimport common.models\nimport InvenTree.helpers\nimport part.models\nimport stock.models\n\nlogger = logging.getLogger('inventree')\n\n\ndef perform_stocktake(target: part.models.Part, user: User, note: str = '', commit=True, **kwargs):\n \"\"\"Perform stocktake action on a single part.\n\n arguments:\n target: A single Part model instance\n commit: If True (default) save the result to the database\n user: User who requested this stocktake\n\n kwargs:\n exclude_external: If True, exclude stock items in external locations (default = False)\n location: Optional StockLocation to filter results for generated report\n\n Returns:\n PartStocktake: A new PartStocktake model instance (for the specified Part)\n\n Note that while we record a *total stocktake* for the Part instance which gets saved to the database,\n the user may have requested a stocktake limited to a particular location.\n\n In this case, the stocktake *report* will be limited to the specified location.\n \"\"\"\n\n # Determine which locations are \"valid\" for the generated report\n location = kwargs.get('location', None)\n locations = location.get_descendants(include_self=True) if location else []\n\n # Grab all \"available\" stock items for the Part\n # We do not include variant stock when performing a stocktake,\n # otherwise the stocktake entries will be duplicated\n stock_entries = target.stock_entries(in_stock=True, include_variants=False)\n\n exclude_external = kwargs.get('exclude_external', False)\n\n if exclude_external:\n stock_entries = stock_entries.exclude(location__external=True)\n\n # Cache min/max pricing information for this Part\n pricing = target.pricing\n\n if not pricing.is_valid:\n # If pricing is not valid, let's update\n logger.info(\"Pricing not valid for %s - updating\", target)\n pricing.update_pricing(cascade=False)\n pricing.refresh_from_db()\n\n base_currency = common.settings.currency_code_default()\n\n # Keep track of total quantity and cost for this part\n total_quantity = 0\n total_cost_min = Money(0, base_currency)\n total_cost_max = Money(0, base_currency)\n\n # Separately, keep track of stock quantity and value within the specified location\n location_item_count = 0\n location_quantity = 0\n location_cost_min = Money(0, base_currency)\n location_cost_max = Money(0, base_currency)\n\n for entry in stock_entries:\n\n entry_cost_min = None\n entry_cost_max = None\n\n # Update price range values\n if entry.purchase_price:\n entry_cost_min = entry.purchase_price\n entry_cost_max = entry.purchase_price\n\n else:\n # If no purchase price is available, fall back to the part pricing data\n entry_cost_min = pricing.overall_min or pricing.overall_max\n entry_cost_max = pricing.overall_max or pricing.overall_min\n\n # Convert to base currency\n try:\n entry_cost_min = convert_money(entry_cost_min, base_currency) * entry.quantity\n entry_cost_max = convert_money(entry_cost_max, base_currency) * entry.quantity\n except Exception:\n\n entry_cost_min = Money(0, base_currency)\n entry_cost_max = Money(0, base_currency)\n\n # Update total cost values\n total_quantity += entry.quantity\n total_cost_min += entry_cost_min\n total_cost_max += entry_cost_max\n\n # Test if this stock item is within the specified location\n if location and entry.location not in locations:\n continue\n\n # Update location cost values\n location_item_count += 1\n location_quantity += entry.quantity\n location_cost_min += entry_cost_min\n location_cost_max += entry_cost_max\n\n # Construct PartStocktake instance\n # Note that we use the *total* values for the PartStocktake instance\n instance = part.models.PartStocktake(\n part=target,\n item_count=stock_entries.count(),\n quantity=total_quantity,\n cost_min=total_cost_min,\n cost_max=total_cost_max,\n note=note,\n user=user,\n )\n\n if commit:\n instance.save()\n\n # Add location-specific data to the instance\n instance.location_item_count = location_item_count\n instance.location_quantity = location_quantity\n instance.location_cost_min = location_cost_min\n instance.location_cost_max = location_cost_max\n\n return instance\n\n\ndef generate_stocktake_report(**kwargs):\n \"\"\"Generated a new stocktake report.\n\n Note that this method should be called only by the background worker process!\n\n Unless otherwise specified, the stocktake report is generated for *all* Part instances.\n Optional filters can by supplied via the kwargs\n\n kwargs:\n user: The user who requested this stocktake (set to None for automated stocktake)\n part: Optional Part instance to filter by (including variant parts)\n category: Optional PartCategory to filter results\n location: Optional StockLocation to filter results\n exclude_external: If True, exclude stock items in external locations (default = False)\n generate_report: If True, generate a stocktake report from the calculated data (default=True)\n update_parts: If True, save stocktake information against each filtered Part (default = True)\n \"\"\"\n\n # Determine if external locations should be excluded\n exclude_external = kwargs.get(\n 'exclude_exernal',\n common.models.InvenTreeSetting.get_setting('STOCKTAKE_EXCLUDE_EXTERNAL', False)\n )\n\n parts = part.models.Part.objects.all()\n user = kwargs.get('user', None)\n\n generate_report = kwargs.get('generate_report', True)\n update_parts = kwargs.get('update_parts', True)\n\n # Filter by 'Part' instance\n if p := kwargs.get('part', None):\n variants = p.get_descendants(include_self=True)\n parts = parts.filter(\n pk__in=[v.pk for v in variants]\n )\n\n # Filter by 'Category' instance (cascading)\n if category := kwargs.get('category', None):\n categories = category.get_descendants(include_self=True)\n parts = parts.filter(category__in=categories)\n\n # Filter by 'Location' instance (cascading)\n # Stocktake report will be limited to parts which have stock items within this location\n if location := kwargs.get('location', None):\n # Extract flat list of all sublocations\n locations = list(location.get_descendants(include_self=True))\n\n # Items which exist within these locations\n items = stock.models.StockItem.objects.filter(location__in=locations)\n\n if exclude_external:\n items = items.exclude(location__external=True)\n\n # List of parts which exist within these locations\n unique_parts = items.order_by().values('part').distinct()\n\n parts = parts.filter(\n pk__in=[result['part'] for result in unique_parts]\n )\n\n # Exit if filters removed all parts\n n_parts = parts.count()\n\n if n_parts == 0:\n logger.info(\"No parts selected for stocktake report - exiting\")\n return\n\n logger.info(\"Generating new stocktake report for %s parts\", n_parts)\n\n base_currency = common.settings.currency_code_default()\n\n # Construct an initial dataset for the stocktake report\n dataset = tablib.Dataset(\n headers=[\n _('Part ID'),\n _('Part Name'),\n _('Part Description'),\n _('Category ID'),\n _('Category Name'),\n _('Stock Items'),\n _('Total Quantity'),\n _('Total Cost Min') + f' ({base_currency})',\n _('Total Cost Max') + f' ({base_currency})',\n ]\n )\n\n parts = parts.prefetch_related('category', 'stock_items')\n\n # Simple profiling for this task\n t_start = time.time()\n\n # Keep track of each individual \"stocktake\" we perform.\n # They may be bulk-commited to the database afterwards\n stocktake_instances = []\n\n total_parts = 0\n\n # Iterate through each Part which matches the filters above\n for p in parts:\n\n # Create a new stocktake for this part (do not commit, this will take place later on)\n stocktake = perform_stocktake(\n p, user, commit=False,\n exclude_external=exclude_external,\n location=location,\n )\n\n if stocktake.quantity == 0:\n # Skip rows with zero total quantity\n continue\n\n total_parts += 1\n\n stocktake_instances.append(stocktake)\n\n # Add a row to the dataset\n dataset.append([\n p.pk,\n p.full_name,\n p.description,\n p.category.pk if p.category else '',\n p.category.name if p.category else '',\n stocktake.location_item_count,\n stocktake.location_quantity,\n InvenTree.helpers.normalize(stocktake.location_cost_min.amount),\n InvenTree.helpers.normalize(stocktake.location_cost_max.amount),\n ])\n\n # Save a new PartStocktakeReport instance\n buffer = io.StringIO()\n buffer.write(dataset.export('csv'))\n\n today = datetime.now().date().isoformat()\n filename = f\"InvenTree_Stocktake_{today}.csv\"\n report_file = ContentFile(buffer.getvalue(), name=filename)\n\n if generate_report:\n report_instance = part.models.PartStocktakeReport.objects.create(\n report=report_file,\n part_count=total_parts,\n user=user\n )\n\n # Notify the requesting user\n if user:\n\n common.notifications.trigger_notification(\n report_instance,\n category='generate_stocktake_report',\n context={\n 'name': _('Stocktake Report Available'),\n 'message': _('A new stocktake report is available for download'),\n },\n targets=[\n user,\n ]\n )\n\n # If 'update_parts' is set, we save stocktake entries for each individual part\n if update_parts:\n # Use bulk_create for efficient insertion of stocktake\n part.models.PartStocktake.objects.bulk_create(\n stocktake_instances,\n batch_size=500,\n )\n\n t_stocktake = time.time() - t_start\n logger.info(\"Generated stocktake report for %s parts in %ss\", total_parts, round(t_stocktake, 2))\n", "path": "InvenTree/part/stocktake.py"}]} | 3,977 | 111 |
gh_patches_debug_15429 | rasdani/github-patches | git_diff | ipython__ipython-10264 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TAB key does not indent
In the git version of IPython, type
```
def foo(a):
pass<ENTER>
```
The new line is not correctly indented, and the tab key does not insert 4 spaces.
/cc @Carreau @takluyver
</issue>
<code>
[start of IPython/terminal/shortcuts.py]
1 import signal
2 import sys
3
4 from prompt_toolkit.enums import DEFAULT_BUFFER, SEARCH_BUFFER
5 from prompt_toolkit.filters import (HasFocus, HasSelection, Condition,
6 ViInsertMode, EmacsInsertMode, HasCompletions)
7 from prompt_toolkit.filters.cli import ViMode, ViNavigationMode
8 from prompt_toolkit.keys import Keys
9 from prompt_toolkit.key_binding.bindings.completion import display_completions_like_readline
10
11 from IPython.utils.decorators import undoc
12
13 @Condition
14 def cursor_in_leading_ws(cli):
15 before = cli.application.buffer.document.current_line_before_cursor
16 return (not before) or before.isspace()
17
18 def register_ipython_shortcuts(registry, shell):
19 """Set up the prompt_toolkit keyboard shortcuts for IPython"""
20 insert_mode = ViInsertMode() | EmacsInsertMode()
21
22 # Ctrl+J == Enter, seemingly
23 registry.add_binding(Keys.ControlJ,
24 filter=(HasFocus(DEFAULT_BUFFER)
25 & ~HasSelection()
26 & insert_mode
27 ))(newline_or_execute_outer(shell))
28
29 registry.add_binding(Keys.ControlBackslash)(force_exit)
30
31 registry.add_binding(Keys.ControlP,
32 filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)
33 ))(previous_history_or_previous_completion)
34
35 registry.add_binding(Keys.ControlN,
36 filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)
37 ))(next_history_or_next_completion)
38
39 registry.add_binding(Keys.ControlG,
40 filter=(HasFocus(DEFAULT_BUFFER) & HasCompletions()
41 ))(dismiss_completion)
42
43 registry.add_binding(Keys.ControlC, filter=HasFocus(DEFAULT_BUFFER)
44 )(reset_buffer)
45
46 registry.add_binding(Keys.ControlC, filter=HasFocus(SEARCH_BUFFER)
47 )(reset_search_buffer)
48
49 supports_suspend = Condition(lambda cli: hasattr(signal, 'SIGTSTP'))
50 registry.add_binding(Keys.ControlZ, filter=supports_suspend
51 )(suspend_to_bg)
52
53 # Ctrl+I == Tab
54 registry.add_binding(Keys.ControlI,
55 filter=(HasFocus(DEFAULT_BUFFER)
56 & ~HasSelection()
57 & insert_mode
58 & cursor_in_leading_ws
59 ))(indent_buffer)
60
61 registry.add_binding(Keys.ControlO,
62 filter=(HasFocus(DEFAULT_BUFFER)
63 & EmacsInsertMode()))(newline_with_copy_margin)
64
65 registry.add_binding(Keys.F2,
66 filter=HasFocus(DEFAULT_BUFFER)
67 )(open_input_in_editor)
68
69 registry.add_binding('v',
70 filter=HasFocus(DEFAULT_BUFFER) & ViNavigationMode()
71 )(open_input_in_editor)
72
73 if shell.display_completions == 'readlinelike':
74 registry.add_binding(Keys.ControlI,
75 filter=(HasFocus(DEFAULT_BUFFER)
76 & ~HasSelection()
77 & insert_mode
78 & ~cursor_in_leading_ws
79 ))(display_completions_like_readline)
80
81 if sys.platform == 'win32':
82 registry.add_binding(Keys.ControlV,
83 filter=(
84 HasFocus(
85 DEFAULT_BUFFER) & ~ViMode()
86 ))(win_paste)
87
88
89 def newline_or_execute_outer(shell):
90 def newline_or_execute(event):
91 """When the user presses return, insert a newline or execute the code."""
92 b = event.current_buffer
93 d = b.document
94
95 if b.complete_state:
96 cc = b.complete_state.current_completion
97 if cc:
98 b.apply_completion(cc)
99 else:
100 b.cancel_completion()
101 return
102
103 if not (d.on_last_line or d.cursor_position_row >= d.line_count
104 - d.empty_line_count_at_the_end()):
105 b.newline()
106 return
107
108 status, indent = shell.input_splitter.check_complete(d.text + '\n')
109
110 if (status != 'incomplete') and b.accept_action.is_returnable:
111 b.accept_action.validate_and_handle(event.cli, b)
112 else:
113 b.insert_text('\n' + (' ' * (indent or 0)))
114 return newline_or_execute
115
116
117 def previous_history_or_previous_completion(event):
118 """
119 Control-P in vi edit mode on readline is history next, unlike default prompt toolkit.
120
121 If completer is open this still select previous completion.
122 """
123 event.current_buffer.auto_up()
124
125
126 def next_history_or_next_completion(event):
127 """
128 Control-N in vi edit mode on readline is history previous, unlike default prompt toolkit.
129
130 If completer is open this still select next completion.
131 """
132 event.current_buffer.auto_down()
133
134
135 def dismiss_completion(event):
136 b = event.current_buffer
137 if b.complete_state:
138 b.cancel_completion()
139
140
141 def reset_buffer(event):
142 b = event.current_buffer
143 if b.complete_state:
144 b.cancel_completion()
145 else:
146 b.reset()
147
148
149 def reset_search_buffer(event):
150 if event.current_buffer.document.text:
151 event.current_buffer.reset()
152 else:
153 event.cli.push_focus(DEFAULT_BUFFER)
154
155 def suspend_to_bg(event):
156 event.cli.suspend_to_background()
157
158 def force_exit(event):
159 """
160 Force exit (with a non-zero return value)
161 """
162 sys.exit("Quit")
163
164 def indent_buffer(event):
165 event.current_buffer.insert_text(' ' * 4)
166
167 def newline_with_copy_margin(event):
168 """
169 Preserve margin and cursor position when using
170 Control-O to insert a newline in EMACS mode
171 """
172 b = event.current_buffer
173 cursor_start_pos = b.document.cursor_position_col
174 b.newline(copy_margin=True)
175 b.cursor_up(count=1)
176 cursor_end_pos = b.document.cursor_position_col
177 if cursor_start_pos != cursor_end_pos:
178 pos_diff = cursor_start_pos - cursor_end_pos
179 b.cursor_right(count=pos_diff)
180
181 def open_input_in_editor(event):
182 event.cli.current_buffer.tempfile_suffix = ".py"
183 event.cli.current_buffer.open_in_editor(event.cli)
184
185
186 if sys.platform == 'win32':
187 from IPython.core.error import TryNext
188 from IPython.lib.clipboard import (ClipboardEmpty,
189 win32_clipboard_get,
190 tkinter_clipboard_get)
191
192 @undoc
193 def win_paste(event):
194 try:
195 text = win32_clipboard_get()
196 except TryNext:
197 try:
198 text = tkinter_clipboard_get()
199 except (TryNext, ClipboardEmpty):
200 return
201 except ClipboardEmpty:
202 return
203 event.current_buffer.insert_text(text.replace('\t', ' ' * 4))
204
[end of IPython/terminal/shortcuts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/IPython/terminal/shortcuts.py b/IPython/terminal/shortcuts.py
--- a/IPython/terminal/shortcuts.py
+++ b/IPython/terminal/shortcuts.py
@@ -100,12 +100,14 @@
b.cancel_completion()
return
- if not (d.on_last_line or d.cursor_position_row >= d.line_count
- - d.empty_line_count_at_the_end()):
- b.newline()
- return
+ before_text = d.text[:d.cursor_position]
+ status, indent = shell.input_splitter.check_complete(before_text + '\n')
- status, indent = shell.input_splitter.check_complete(d.text + '\n')
+ if not (d.on_last_line or
+ d.cursor_position_row >= d.line_count - d.empty_line_count_at_the_end()
+ ):
+ b.insert_text('\n' + (' ' * (indent or 0)))
+ return
if (status != 'incomplete') and b.accept_action.is_returnable:
b.accept_action.validate_and_handle(event.cli, b)
| {"golden_diff": "diff --git a/IPython/terminal/shortcuts.py b/IPython/terminal/shortcuts.py\n--- a/IPython/terminal/shortcuts.py\n+++ b/IPython/terminal/shortcuts.py\n@@ -100,12 +100,14 @@\n b.cancel_completion()\n return\n \n- if not (d.on_last_line or d.cursor_position_row >= d.line_count\n- - d.empty_line_count_at_the_end()):\n- b.newline()\n- return\n+ before_text = d.text[:d.cursor_position]\n+ status, indent = shell.input_splitter.check_complete(before_text + '\\n')\n \n- status, indent = shell.input_splitter.check_complete(d.text + '\\n')\n+ if not (d.on_last_line or\n+ d.cursor_position_row >= d.line_count - d.empty_line_count_at_the_end()\n+ ):\n+ b.insert_text('\\n' + (' ' * (indent or 0)))\n+ return\n \n if (status != 'incomplete') and b.accept_action.is_returnable:\n b.accept_action.validate_and_handle(event.cli, b)\n", "issue": "TAB key does not indent\nIn the git version of IPython, type\n\n```\ndef foo(a):\n pass<ENTER>\n```\n\nThe new line is not correctly indented, and the tab key does not insert 4 spaces.\n\n/cc @Carreau @takluyver \n\n", "before_files": [{"content": "import signal\nimport sys\n\nfrom prompt_toolkit.enums import DEFAULT_BUFFER, SEARCH_BUFFER\nfrom prompt_toolkit.filters import (HasFocus, HasSelection, Condition,\n ViInsertMode, EmacsInsertMode, HasCompletions)\nfrom prompt_toolkit.filters.cli import ViMode, ViNavigationMode\nfrom prompt_toolkit.keys import Keys\nfrom prompt_toolkit.key_binding.bindings.completion import display_completions_like_readline\n\nfrom IPython.utils.decorators import undoc\n\n@Condition\ndef cursor_in_leading_ws(cli):\n before = cli.application.buffer.document.current_line_before_cursor\n return (not before) or before.isspace()\n\ndef register_ipython_shortcuts(registry, shell):\n \"\"\"Set up the prompt_toolkit keyboard shortcuts for IPython\"\"\"\n insert_mode = ViInsertMode() | EmacsInsertMode()\n\n # Ctrl+J == Enter, seemingly\n registry.add_binding(Keys.ControlJ,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n ))(newline_or_execute_outer(shell))\n\n registry.add_binding(Keys.ControlBackslash)(force_exit)\n\n registry.add_binding(Keys.ControlP,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(previous_history_or_previous_completion)\n\n registry.add_binding(Keys.ControlN,\n filter=(ViInsertMode() & HasFocus(DEFAULT_BUFFER)\n ))(next_history_or_next_completion)\n\n registry.add_binding(Keys.ControlG,\n filter=(HasFocus(DEFAULT_BUFFER) & HasCompletions()\n ))(dismiss_completion)\n\n registry.add_binding(Keys.ControlC, filter=HasFocus(DEFAULT_BUFFER)\n )(reset_buffer)\n\n registry.add_binding(Keys.ControlC, filter=HasFocus(SEARCH_BUFFER)\n )(reset_search_buffer)\n\n supports_suspend = Condition(lambda cli: hasattr(signal, 'SIGTSTP'))\n registry.add_binding(Keys.ControlZ, filter=supports_suspend\n )(suspend_to_bg)\n\n # Ctrl+I == Tab\n registry.add_binding(Keys.ControlI,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n & cursor_in_leading_ws\n ))(indent_buffer)\n\n registry.add_binding(Keys.ControlO,\n filter=(HasFocus(DEFAULT_BUFFER)\n & EmacsInsertMode()))(newline_with_copy_margin)\n\n registry.add_binding(Keys.F2,\n filter=HasFocus(DEFAULT_BUFFER)\n )(open_input_in_editor)\n\n registry.add_binding('v',\n filter=HasFocus(DEFAULT_BUFFER) & ViNavigationMode()\n )(open_input_in_editor)\n\n if shell.display_completions == 'readlinelike':\n registry.add_binding(Keys.ControlI,\n filter=(HasFocus(DEFAULT_BUFFER)\n & ~HasSelection()\n & insert_mode\n & ~cursor_in_leading_ws\n ))(display_completions_like_readline)\n\n if sys.platform == 'win32':\n registry.add_binding(Keys.ControlV,\n filter=(\n HasFocus(\n DEFAULT_BUFFER) & ~ViMode()\n ))(win_paste)\n\n\ndef newline_or_execute_outer(shell):\n def newline_or_execute(event):\n \"\"\"When the user presses return, insert a newline or execute the code.\"\"\"\n b = event.current_buffer\n d = b.document\n\n if b.complete_state:\n cc = b.complete_state.current_completion\n if cc:\n b.apply_completion(cc)\n else:\n b.cancel_completion()\n return\n\n if not (d.on_last_line or d.cursor_position_row >= d.line_count\n - d.empty_line_count_at_the_end()):\n b.newline()\n return\n\n status, indent = shell.input_splitter.check_complete(d.text + '\\n')\n\n if (status != 'incomplete') and b.accept_action.is_returnable:\n b.accept_action.validate_and_handle(event.cli, b)\n else:\n b.insert_text('\\n' + (' ' * (indent or 0)))\n return newline_or_execute\n\n\ndef previous_history_or_previous_completion(event):\n \"\"\"\n Control-P in vi edit mode on readline is history next, unlike default prompt toolkit.\n\n If completer is open this still select previous completion.\n \"\"\"\n event.current_buffer.auto_up()\n\n\ndef next_history_or_next_completion(event):\n \"\"\"\n Control-N in vi edit mode on readline is history previous, unlike default prompt toolkit.\n\n If completer is open this still select next completion.\n \"\"\"\n event.current_buffer.auto_down()\n\n\ndef dismiss_completion(event):\n b = event.current_buffer\n if b.complete_state:\n b.cancel_completion()\n\n\ndef reset_buffer(event):\n b = event.current_buffer\n if b.complete_state:\n b.cancel_completion()\n else:\n b.reset()\n\n\ndef reset_search_buffer(event):\n if event.current_buffer.document.text:\n event.current_buffer.reset()\n else:\n event.cli.push_focus(DEFAULT_BUFFER)\n\ndef suspend_to_bg(event):\n event.cli.suspend_to_background()\n\ndef force_exit(event):\n \"\"\"\n Force exit (with a non-zero return value)\n \"\"\"\n sys.exit(\"Quit\")\n\ndef indent_buffer(event):\n event.current_buffer.insert_text(' ' * 4)\n\ndef newline_with_copy_margin(event):\n \"\"\"\n Preserve margin and cursor position when using\n Control-O to insert a newline in EMACS mode\n \"\"\"\n b = event.current_buffer\n cursor_start_pos = b.document.cursor_position_col\n b.newline(copy_margin=True)\n b.cursor_up(count=1)\n cursor_end_pos = b.document.cursor_position_col\n if cursor_start_pos != cursor_end_pos:\n pos_diff = cursor_start_pos - cursor_end_pos\n b.cursor_right(count=pos_diff)\n\ndef open_input_in_editor(event):\n event.cli.current_buffer.tempfile_suffix = \".py\"\n event.cli.current_buffer.open_in_editor(event.cli)\n\n\nif sys.platform == 'win32':\n from IPython.core.error import TryNext\n from IPython.lib.clipboard import (ClipboardEmpty,\n win32_clipboard_get,\n tkinter_clipboard_get)\n\n @undoc\n def win_paste(event):\n try:\n text = win32_clipboard_get()\n except TryNext:\n try:\n text = tkinter_clipboard_get()\n except (TryNext, ClipboardEmpty):\n return\n except ClipboardEmpty:\n return\n event.current_buffer.insert_text(text.replace('\\t', ' ' * 4))\n", "path": "IPython/terminal/shortcuts.py"}]} | 2,458 | 241 |
gh_patches_debug_23562 | rasdani/github-patches | git_diff | internetarchive__openlibrary-6807 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
librarian merge queue fixes
Closes #6807
- allows flexible sorting with ?order=asc or desc -- piggy backs on #6785
- adds total counts to Open and Closed
- removes "All"
- fixes bug where page? persists when switching modes -- fixes **half** of #6782 (i.e. mode part, not submitter!)
<!-- What does this PR achieve? [feature|hotfix|fix|refactor] -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
@jimchamp
<!-- Attribution Disclaimer: By proposing this pull request, I affirm to have made a best-effort and exercised my discretion to make sure relevant sections of this code which substantially leverage code suggestions, code generation, or code snippets from sources (e.g. Stack Overflow, GitHub) have been annotated with basic attribution so reviewers & contributors may have confidence and access to the correct context to evaluate and use this code. -->
</issue>
<code>
[start of openlibrary/plugins/upstream/edits.py]
1 """Librarian Edits
2 """
3
4 import json
5 import web
6
7 from openlibrary import accounts
8 from openlibrary.core.edits import CommunityEditsQueue, get_status_for_view
9 from infogami.utils import delegate
10 from infogami.utils.view import render_template
11
12
13 def create_request(olids: str, username: str, comment: str = None):
14 work_ids = olids.split(',')
15 return CommunityEditsQueue.submit_work_merge_request(
16 work_ids,
17 submitter=username,
18 comment=comment,
19 )
20
21
22 class community_edits_queue(delegate.page):
23 path = '/merges'
24
25 def POST(self):
26 def response(status='ok', **kwargs):
27 return {'status': status, **kwargs}
28
29 i = web.input(
30 work_ids="", # Comma-separated OLIDs (OL1W,OL2W,OL3W,...,OL111W)
31 rtype="merge-works",
32 mrid=None,
33 action=None, # create, approve, decline, comment, unassign, create-merged
34 comment=None,
35 )
36 user = accounts.get_current_user()
37 username = user['key'].split('/')[-1]
38 if i.mrid: # We are updating an existing merge request
39 if i.action == 'comment':
40 if i.comment:
41 CommunityEditsQueue.comment_request(i.mrid, username, i.comment)
42 return delegate.RawText(
43 json.dumps(response()), content_type="application/json"
44 )
45 else:
46 return delegate.RawText(
47 json.dumps(
48 response(
49 status='error', error='No comment sent in request.'
50 )
51 )
52 )
53 elif i.action == 'claim':
54 result = CommunityEditsQueue.assign_request(i.mrid, username)
55 return delegate.RawText(
56 json.dumps(response(**result)), content_type="application/json"
57 )
58 elif i.action == 'unassign':
59 CommunityEditsQueue.unassign_request(i.mrid)
60 status = get_status_for_view(CommunityEditsQueue.STATUS['PENDING'])
61 return delegate.RawText(json.dumps(response(newStatus=status)))
62 else:
63 if i.action == "decline":
64 status = CommunityEditsQueue.STATUS['DECLINED']
65 elif i.action == 'approve':
66 status = CommunityEditsQueue.STATUS['MERGED']
67 CommunityEditsQueue.update_request_status(
68 i.mrid, status, username, comment=i.comment
69 )
70 return delegate.RawText(
71 json.dumps(response()), content_type="application/json"
72 )
73 elif i.rtype == "merge-works":
74 if i.action == 'create':
75 result = create_request(i.work_ids, username, i.comment)
76 resp = (
77 response(id=result)
78 if result
79 else response(
80 status='error',
81 error='A request to merge these works has already been submitted.',
82 )
83 )
84 return delegate.RawText(
85 json.dumps(resp), content_type="application/json"
86 )
87 elif i.action == 'create-merged':
88 result = CommunityEditsQueue.submit_work_merge_request(
89 i.work_ids.split(','),
90 submitter=username,
91 reviewer=username,
92 status=CommunityEditsQueue.STATUS['MERGED'],
93 )
94 return delegate.RawText(
95 json.dumps(response(id=result)), content_type='application/json'
96 )
97
98 def GET(self):
99 i = web.input(page=1, limit=25, mode="open", submitter=None, reviewer=None)
100 merge_requests = CommunityEditsQueue.get_requests(
101 page=int(i.page),
102 limit=int(i.limit),
103 mode=i.mode,
104 submitter=i.submitter,
105 reviewer=i.reviewer,
106 order='created desc',
107 ).list()
108
109 total_found = CommunityEditsQueue.get_counts_by_mode(
110 mode=i.mode, submitter=i.submitter, reviewer=i.reviewer
111 )
112 return render_template(
113 'merge_queue/merge_queue',
114 total_found,
115 merge_requests=merge_requests,
116 )
117
118 def extract_olids(self, url):
119 query_string = url.split('?')[1]
120 split_params = query_string.split('&')
121 params = {}
122 for p in split_params:
123 kv = p.split('=')
124 params[kv[0]] = kv[1]
125 return params['records'].split(',')
126
127
128 class ui_partials(delegate.page):
129 path = '/merges/partials'
130
131 def GET(self):
132 i = web.input(type=None, comment='')
133 if i.type == 'comment':
134 component = render_template('merge_queue/comment', comment_str=i.comment)
135 return delegate.RawText(component)
136
137
138 def setup():
139 pass
140
[end of openlibrary/plugins/upstream/edits.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openlibrary/plugins/upstream/edits.py b/openlibrary/plugins/upstream/edits.py
--- a/openlibrary/plugins/upstream/edits.py
+++ b/openlibrary/plugins/upstream/edits.py
@@ -96,19 +96,26 @@
)
def GET(self):
- i = web.input(page=1, limit=25, mode="open", submitter=None, reviewer=None)
+ i = web.input(
+ page=1, limit=25, mode="open", submitter=None, reviewer=None, order='desc'
+ )
merge_requests = CommunityEditsQueue.get_requests(
page=int(i.page),
limit=int(i.limit),
mode=i.mode,
submitter=i.submitter,
reviewer=i.reviewer,
- order='created desc',
+ order=f'created {i.order}',
).list()
- total_found = CommunityEditsQueue.get_counts_by_mode(
- mode=i.mode, submitter=i.submitter, reviewer=i.reviewer
- )
+ total_found = {
+ "open": CommunityEditsQueue.get_counts_by_mode(
+ mode='open', submitter=i.submitter, reviewer=i.reviewer
+ ),
+ "closed": CommunityEditsQueue.get_counts_by_mode(
+ mode='closed', submitter=i.submitter, reviewer=i.reviewer
+ ),
+ }
return render_template(
'merge_queue/merge_queue',
total_found,
| {"golden_diff": "diff --git a/openlibrary/plugins/upstream/edits.py b/openlibrary/plugins/upstream/edits.py\n--- a/openlibrary/plugins/upstream/edits.py\n+++ b/openlibrary/plugins/upstream/edits.py\n@@ -96,19 +96,26 @@\n )\n \n def GET(self):\n- i = web.input(page=1, limit=25, mode=\"open\", submitter=None, reviewer=None)\n+ i = web.input(\n+ page=1, limit=25, mode=\"open\", submitter=None, reviewer=None, order='desc'\n+ )\n merge_requests = CommunityEditsQueue.get_requests(\n page=int(i.page),\n limit=int(i.limit),\n mode=i.mode,\n submitter=i.submitter,\n reviewer=i.reviewer,\n- order='created desc',\n+ order=f'created {i.order}',\n ).list()\n \n- total_found = CommunityEditsQueue.get_counts_by_mode(\n- mode=i.mode, submitter=i.submitter, reviewer=i.reviewer\n- )\n+ total_found = {\n+ \"open\": CommunityEditsQueue.get_counts_by_mode(\n+ mode='open', submitter=i.submitter, reviewer=i.reviewer\n+ ),\n+ \"closed\": CommunityEditsQueue.get_counts_by_mode(\n+ mode='closed', submitter=i.submitter, reviewer=i.reviewer\n+ ),\n+ }\n return render_template(\n 'merge_queue/merge_queue',\n total_found,\n", "issue": "librarian merge queue fixes\nCloses #6807\r\n\r\n- allows flexible sorting with ?order=asc or desc -- piggy backs on #6785 \r\n- adds total counts to Open and Closed\r\n- removes \"All\"\r\n- fixes bug where page? persists when switching modes -- fixes **half** of #6782 (i.e. mode part, not submitter!)\r\n\r\n\r\n<!-- What does this PR achieve? [feature|hotfix|fix|refactor] -->\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n@jimchamp \r\n\r\n<!-- Attribution Disclaimer: By proposing this pull request, I affirm to have made a best-effort and exercised my discretion to make sure relevant sections of this code which substantially leverage code suggestions, code generation, or code snippets from sources (e.g. Stack Overflow, GitHub) have been annotated with basic attribution so reviewers & contributors may have confidence and access to the correct context to evaluate and use this code. -->\r\n\n", "before_files": [{"content": "\"\"\"Librarian Edits\n\"\"\"\n\nimport json\nimport web\n\nfrom openlibrary import accounts\nfrom openlibrary.core.edits import CommunityEditsQueue, get_status_for_view\nfrom infogami.utils import delegate\nfrom infogami.utils.view import render_template\n\n\ndef create_request(olids: str, username: str, comment: str = None):\n work_ids = olids.split(',')\n return CommunityEditsQueue.submit_work_merge_request(\n work_ids,\n submitter=username,\n comment=comment,\n )\n\n\nclass community_edits_queue(delegate.page):\n path = '/merges'\n\n def POST(self):\n def response(status='ok', **kwargs):\n return {'status': status, **kwargs}\n\n i = web.input(\n work_ids=\"\", # Comma-separated OLIDs (OL1W,OL2W,OL3W,...,OL111W)\n rtype=\"merge-works\",\n mrid=None,\n action=None, # create, approve, decline, comment, unassign, create-merged\n comment=None,\n )\n user = accounts.get_current_user()\n username = user['key'].split('/')[-1]\n if i.mrid: # We are updating an existing merge request\n if i.action == 'comment':\n if i.comment:\n CommunityEditsQueue.comment_request(i.mrid, username, i.comment)\n return delegate.RawText(\n json.dumps(response()), content_type=\"application/json\"\n )\n else:\n return delegate.RawText(\n json.dumps(\n response(\n status='error', error='No comment sent in request.'\n )\n )\n )\n elif i.action == 'claim':\n result = CommunityEditsQueue.assign_request(i.mrid, username)\n return delegate.RawText(\n json.dumps(response(**result)), content_type=\"application/json\"\n )\n elif i.action == 'unassign':\n CommunityEditsQueue.unassign_request(i.mrid)\n status = get_status_for_view(CommunityEditsQueue.STATUS['PENDING'])\n return delegate.RawText(json.dumps(response(newStatus=status)))\n else:\n if i.action == \"decline\":\n status = CommunityEditsQueue.STATUS['DECLINED']\n elif i.action == 'approve':\n status = CommunityEditsQueue.STATUS['MERGED']\n CommunityEditsQueue.update_request_status(\n i.mrid, status, username, comment=i.comment\n )\n return delegate.RawText(\n json.dumps(response()), content_type=\"application/json\"\n )\n elif i.rtype == \"merge-works\":\n if i.action == 'create':\n result = create_request(i.work_ids, username, i.comment)\n resp = (\n response(id=result)\n if result\n else response(\n status='error',\n error='A request to merge these works has already been submitted.',\n )\n )\n return delegate.RawText(\n json.dumps(resp), content_type=\"application/json\"\n )\n elif i.action == 'create-merged':\n result = CommunityEditsQueue.submit_work_merge_request(\n i.work_ids.split(','),\n submitter=username,\n reviewer=username,\n status=CommunityEditsQueue.STATUS['MERGED'],\n )\n return delegate.RawText(\n json.dumps(response(id=result)), content_type='application/json'\n )\n\n def GET(self):\n i = web.input(page=1, limit=25, mode=\"open\", submitter=None, reviewer=None)\n merge_requests = CommunityEditsQueue.get_requests(\n page=int(i.page),\n limit=int(i.limit),\n mode=i.mode,\n submitter=i.submitter,\n reviewer=i.reviewer,\n order='created desc',\n ).list()\n\n total_found = CommunityEditsQueue.get_counts_by_mode(\n mode=i.mode, submitter=i.submitter, reviewer=i.reviewer\n )\n return render_template(\n 'merge_queue/merge_queue',\n total_found,\n merge_requests=merge_requests,\n )\n\n def extract_olids(self, url):\n query_string = url.split('?')[1]\n split_params = query_string.split('&')\n params = {}\n for p in split_params:\n kv = p.split('=')\n params[kv[0]] = kv[1]\n return params['records'].split(',')\n\n\nclass ui_partials(delegate.page):\n path = '/merges/partials'\n\n def GET(self):\n i = web.input(type=None, comment='')\n if i.type == 'comment':\n component = render_template('merge_queue/comment', comment_str=i.comment)\n return delegate.RawText(component)\n\n\ndef setup():\n pass\n", "path": "openlibrary/plugins/upstream/edits.py"}]} | 2,035 | 323 |
gh_patches_debug_15287 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-642 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Expand Environment Variables in Cookiecutter Configuration File
I set my cookiecutterrc file via an environment variable, like this:
```
export COOKIECUTTER_CONFIG="$XDG_CONFIG_HOME/cookiecutter/cookiecutterrc"
```
In my cookiecutterrc, I'd like to use those same environment variables to set paths, however they don't currently expand:
```
default_context:
full_name: "Nathan Farrar"
email: "[email protected]"
github_username: "nfarrar"
cookiecutters_dir: "$XDG_CACHE_HOME/cookiecutter/template"
replay_dir: "$XDG_CACHE_HOME/cookiecutter/replay"
abbreviations:
pp: https://github.com/audreyr/cookiecutter-pypackage.git
gh: https://github.com/{0}.git
bb: https://bitbucket.org/{0}
```
For example:
```
$ cookiecutter pp
$ ls ~/
...
drwxr-xr-x 3 nfarrar staff 102 Feb 28 07:37 '$XDG_CACHE_HOME'
...
```
</issue>
<code>
[start of cookiecutter/config.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.config
6 -------------------
7
8 Global configuration handling
9 """
10
11 from __future__ import unicode_literals
12 import copy
13 import logging
14 import os
15 import io
16
17 import poyo
18
19 from .exceptions import ConfigDoesNotExistException
20 from .exceptions import InvalidConfiguration
21
22
23 logger = logging.getLogger(__name__)
24
25 USER_CONFIG_PATH = os.path.expanduser('~/.cookiecutterrc')
26
27 DEFAULT_CONFIG = {
28 'cookiecutters_dir': os.path.expanduser('~/.cookiecutters/'),
29 'replay_dir': os.path.expanduser('~/.cookiecutter_replay/'),
30 'default_context': {}
31 }
32
33
34 def get_config(config_path):
35 """
36 Retrieve the config from the specified path, returning it as a config dict.
37 """
38
39 if not os.path.exists(config_path):
40 raise ConfigDoesNotExistException
41
42 logger.debug('config_path is {0}'.format(config_path))
43 with io.open(config_path, encoding='utf-8') as file_handle:
44 try:
45 yaml_dict = poyo.parse_string(file_handle.read())
46 except poyo.exceptions.PoyoException as e:
47 raise InvalidConfiguration(
48 'Unable to parse YAML file {}. Error: {}'
49 ''.format(config_path, e)
50 )
51
52 config_dict = copy.copy(DEFAULT_CONFIG)
53 config_dict.update(yaml_dict)
54
55 return config_dict
56
57
58 def get_user_config(config_file=USER_CONFIG_PATH):
59 """Retrieve the config from a file or return the defaults if None is
60 passed. If an environment variable `COOKIECUTTER_CONFIG` is set up, try
61 to load its value. Otherwise fall back to a default file or config.
62 """
63 # Do NOT load a config. Return defaults instead.
64 if config_file is None:
65 return copy.copy(DEFAULT_CONFIG)
66
67 # Load the given config file
68 if config_file and config_file is not USER_CONFIG_PATH:
69 return get_config(config_file)
70
71 try:
72 # Does the user set up a config environment variable?
73 env_config_file = os.environ['COOKIECUTTER_CONFIG']
74 except KeyError:
75 # Load an optional user config if it exists
76 # otherwise return the defaults
77 if os.path.exists(USER_CONFIG_PATH):
78 return get_config(USER_CONFIG_PATH)
79 else:
80 return copy.copy(DEFAULT_CONFIG)
81 else:
82 # There is a config environment variable. Try to load it.
83 # Do not check for existence, so invalid file paths raise an error.
84 return get_config(env_config_file)
85
[end of cookiecutter/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/config.py b/cookiecutter/config.py
--- a/cookiecutter/config.py
+++ b/cookiecutter/config.py
@@ -31,6 +31,13 @@
}
+def _expand_path(path):
+ """Expand both environment variables and user home in the given path."""
+ path = os.path.expandvars(path)
+ path = os.path.expanduser(path)
+ return path
+
+
def get_config(config_path):
"""
Retrieve the config from the specified path, returning it as a config dict.
@@ -52,6 +59,12 @@
config_dict = copy.copy(DEFAULT_CONFIG)
config_dict.update(yaml_dict)
+ raw_replay_dir = config_dict['replay_dir']
+ config_dict['replay_dir'] = _expand_path(raw_replay_dir)
+
+ raw_cookies_dir = config_dict['cookiecutters_dir']
+ config_dict['cookiecutters_dir'] = _expand_path(raw_cookies_dir)
+
return config_dict
| {"golden_diff": "diff --git a/cookiecutter/config.py b/cookiecutter/config.py\n--- a/cookiecutter/config.py\n+++ b/cookiecutter/config.py\n@@ -31,6 +31,13 @@\n }\n \n \n+def _expand_path(path):\n+ \"\"\"Expand both environment variables and user home in the given path.\"\"\"\n+ path = os.path.expandvars(path)\n+ path = os.path.expanduser(path)\n+ return path\n+\n+\n def get_config(config_path):\n \"\"\"\n Retrieve the config from the specified path, returning it as a config dict.\n@@ -52,6 +59,12 @@\n config_dict = copy.copy(DEFAULT_CONFIG)\n config_dict.update(yaml_dict)\n \n+ raw_replay_dir = config_dict['replay_dir']\n+ config_dict['replay_dir'] = _expand_path(raw_replay_dir)\n+\n+ raw_cookies_dir = config_dict['cookiecutters_dir']\n+ config_dict['cookiecutters_dir'] = _expand_path(raw_cookies_dir)\n+\n return config_dict\n", "issue": "Expand Environment Variables in Cookiecutter Configuration File\nI set my cookiecutterrc file via an environment variable, like this:\n\n```\nexport COOKIECUTTER_CONFIG=\"$XDG_CONFIG_HOME/cookiecutter/cookiecutterrc\"\n```\n\nIn my cookiecutterrc, I'd like to use those same environment variables to set paths, however they don't currently expand:\n\n```\ndefault_context:\n full_name: \"Nathan Farrar\"\n email: \"[email protected]\"\n github_username: \"nfarrar\"\ncookiecutters_dir: \"$XDG_CACHE_HOME/cookiecutter/template\"\nreplay_dir: \"$XDG_CACHE_HOME/cookiecutter/replay\"\nabbreviations:\n pp: https://github.com/audreyr/cookiecutter-pypackage.git\n gh: https://github.com/{0}.git\n bb: https://bitbucket.org/{0}\n```\n\nFor example:\n\n```\n$ cookiecutter pp\n$ ls ~/\n...\ndrwxr-xr-x 3 nfarrar staff 102 Feb 28 07:37 '$XDG_CACHE_HOME'\n...\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.config\n-------------------\n\nGlobal configuration handling\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport copy\nimport logging\nimport os\nimport io\n\nimport poyo\n\nfrom .exceptions import ConfigDoesNotExistException\nfrom .exceptions import InvalidConfiguration\n\n\nlogger = logging.getLogger(__name__)\n\nUSER_CONFIG_PATH = os.path.expanduser('~/.cookiecutterrc')\n\nDEFAULT_CONFIG = {\n 'cookiecutters_dir': os.path.expanduser('~/.cookiecutters/'),\n 'replay_dir': os.path.expanduser('~/.cookiecutter_replay/'),\n 'default_context': {}\n}\n\n\ndef get_config(config_path):\n \"\"\"\n Retrieve the config from the specified path, returning it as a config dict.\n \"\"\"\n\n if not os.path.exists(config_path):\n raise ConfigDoesNotExistException\n\n logger.debug('config_path is {0}'.format(config_path))\n with io.open(config_path, encoding='utf-8') as file_handle:\n try:\n yaml_dict = poyo.parse_string(file_handle.read())\n except poyo.exceptions.PoyoException as e:\n raise InvalidConfiguration(\n 'Unable to parse YAML file {}. Error: {}'\n ''.format(config_path, e)\n )\n\n config_dict = copy.copy(DEFAULT_CONFIG)\n config_dict.update(yaml_dict)\n\n return config_dict\n\n\ndef get_user_config(config_file=USER_CONFIG_PATH):\n \"\"\"Retrieve the config from a file or return the defaults if None is\n passed. If an environment variable `COOKIECUTTER_CONFIG` is set up, try\n to load its value. Otherwise fall back to a default file or config.\n \"\"\"\n # Do NOT load a config. Return defaults instead.\n if config_file is None:\n return copy.copy(DEFAULT_CONFIG)\n\n # Load the given config file\n if config_file and config_file is not USER_CONFIG_PATH:\n return get_config(config_file)\n\n try:\n # Does the user set up a config environment variable?\n env_config_file = os.environ['COOKIECUTTER_CONFIG']\n except KeyError:\n # Load an optional user config if it exists\n # otherwise return the defaults\n if os.path.exists(USER_CONFIG_PATH):\n return get_config(USER_CONFIG_PATH)\n else:\n return copy.copy(DEFAULT_CONFIG)\n else:\n # There is a config environment variable. Try to load it.\n # Do not check for existence, so invalid file paths raise an error.\n return get_config(env_config_file)\n", "path": "cookiecutter/config.py"}]} | 1,486 | 226 |
gh_patches_debug_60945 | rasdani/github-patches | git_diff | Netflix__lemur-766 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set lemur to log to stdout
When running lemur inside docker I would like to have it log everything to `stdout` so that I can forward logs to splunk. At the moment `lemur.config.py` has a `LEMUR_LOG` parameter that expects a filename. Is there a way to configure lemur to log to stdout instead of a file?
</issue>
<code>
[start of lemur/factory.py]
1 """
2 .. module: lemur.factory
3 :platform: Unix
4 :synopsis: This module contains all the needed functions to allow
5 the factory app creation.
6
7 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
8 :license: Apache, see LICENSE for more details.
9 .. moduleauthor:: Kevin Glisson <[email protected]>
10
11 """
12 import os
13 import imp
14 import errno
15 import pkg_resources
16
17 from logging import Formatter, StreamHandler
18 from logging.handlers import RotatingFileHandler
19
20 from flask import Flask
21 from lemur.common.health import mod as health
22 from lemur.extensions import db, migrate, principal, smtp_mail, metrics
23
24
25 DEFAULT_BLUEPRINTS = (
26 health,
27 )
28
29 API_VERSION = 1
30
31
32 def create_app(app_name=None, blueprints=None, config=None):
33 """
34 Lemur application factory
35
36 :param config:
37 :param app_name:
38 :param blueprints:
39 :return:
40 """
41 if not blueprints:
42 blueprints = DEFAULT_BLUEPRINTS
43 else:
44 blueprints = blueprints + DEFAULT_BLUEPRINTS
45
46 if not app_name:
47 app_name = __name__
48
49 app = Flask(app_name)
50 configure_app(app, config)
51 configure_blueprints(app, blueprints)
52 configure_extensions(app)
53 configure_logging(app)
54 install_plugins(app)
55
56 @app.teardown_appcontext
57 def teardown(exception=None):
58 if db.session:
59 db.session.remove()
60
61 return app
62
63
64 def from_file(file_path, silent=False):
65 """
66 Updates the values in the config from a Python file. This function
67 behaves as if the file was imported as module with the
68
69 :param file_path:
70 :param silent:
71 """
72 d = imp.new_module('config')
73 d.__file__ = file_path
74 try:
75 with open(file_path) as config_file:
76 exec(compile(config_file.read(), # nosec: config file safe
77 file_path, 'exec'), d.__dict__)
78 except IOError as e:
79 if silent and e.errno in (errno.ENOENT, errno.EISDIR):
80 return False
81 e.strerror = 'Unable to load configuration file (%s)' % e.strerror
82 raise
83 return d
84
85
86 def configure_app(app, config=None):
87 """
88 Different ways of configuration
89
90 :param app:
91 :param config:
92 :return:
93 """
94 # respect the config first
95 if config and config != 'None':
96 app.config['CONFIG_PATH'] = config
97 app.config.from_object(from_file(config))
98 else:
99 try:
100 app.config.from_envvar("LEMUR_CONF")
101 except RuntimeError:
102 # look in default paths
103 if os.path.isfile(os.path.expanduser("~/.lemur/lemur.conf.py")):
104 app.config.from_object(from_file(os.path.expanduser("~/.lemur/lemur.conf.py")))
105 else:
106 app.config.from_object(from_file(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'default.conf.py')))
107
108 # we don't use this
109 app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
110
111
112 def configure_extensions(app):
113 """
114 Attaches and configures any needed flask extensions
115 to our app.
116
117 :param app:
118 """
119 db.init_app(app)
120 migrate.init_app(app, db)
121 principal.init_app(app)
122 smtp_mail.init_app(app)
123 metrics.init_app(app)
124
125
126 def configure_blueprints(app, blueprints):
127 """
128 We prefix our APIs with their given version so that we can support
129 multiple concurrent API versions.
130
131 :param app:
132 :param blueprints:
133 """
134 for blueprint in blueprints:
135 app.register_blueprint(blueprint, url_prefix="/api/{0}".format(API_VERSION))
136
137
138 def configure_logging(app):
139 """
140 Sets up application wide logging.
141
142 :param app:
143 """
144 handler = RotatingFileHandler(app.config.get('LOG_FILE', 'lemur.log'), maxBytes=10000000, backupCount=100)
145
146 handler.setFormatter(Formatter(
147 '%(asctime)s %(levelname)s: %(message)s '
148 '[in %(pathname)s:%(lineno)d]'
149 ))
150
151 handler.setLevel(app.config.get('LOG_LEVEL', 'DEBUG'))
152 app.logger.setLevel(app.config.get('LOG_LEVEL', 'DEBUG'))
153 app.logger.addHandler(handler)
154
155 stream_handler = StreamHandler()
156 stream_handler.setLevel(app.config.get('LOG_LEVEL'))
157 app.logger.addHandler(stream_handler)
158
159
160 def install_plugins(app):
161 """
162 Installs new issuers that are not currently bundled with Lemur.
163
164 :param app:
165 :return:
166 """
167 from lemur.plugins import plugins
168 from lemur.plugins.base import register
169 # entry_points={
170 # 'lemur.plugins': [
171 # 'verisign = lemur_verisign.plugin:VerisignPlugin'
172 # ],
173 # },
174 for ep in pkg_resources.iter_entry_points('lemur.plugins'):
175 try:
176 plugin = ep.load()
177 except Exception:
178 import traceback
179 app.logger.error("Failed to load plugin %r:\n%s\n" % (ep.name, traceback.format_exc()))
180 else:
181 register(plugin)
182
183 # ensure that we have some way to notify
184 with app.app_context():
185 try:
186 slug = app.config.get("LEMUR_DEFAULT_NOTIFICATION_PLUGIN", "email-notification")
187 plugins.get(slug)
188 except KeyError:
189 raise Exception("Unable to location notification plugin: {slug}. Ensure that LEMUR_DEFAULT_NOTIFICATION_PLUGIN is set to a valid and installed notification plugin.".format(slug=slug))
190
[end of lemur/factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lemur/factory.py b/lemur/factory.py
--- a/lemur/factory.py
+++ b/lemur/factory.py
@@ -153,7 +153,7 @@
app.logger.addHandler(handler)
stream_handler = StreamHandler()
- stream_handler.setLevel(app.config.get('LOG_LEVEL'))
+ stream_handler.setLevel(app.config.get('LOG_LEVEL', 'DEBUG'))
app.logger.addHandler(stream_handler)
| {"golden_diff": "diff --git a/lemur/factory.py b/lemur/factory.py\n--- a/lemur/factory.py\n+++ b/lemur/factory.py\n@@ -153,7 +153,7 @@\n app.logger.addHandler(handler)\n \n stream_handler = StreamHandler()\n- stream_handler.setLevel(app.config.get('LOG_LEVEL'))\n+ stream_handler.setLevel(app.config.get('LOG_LEVEL', 'DEBUG'))\n app.logger.addHandler(stream_handler)\n", "issue": "Set lemur to log to stdout\nWhen running lemur inside docker I would like to have it log everything to `stdout` so that I can forward logs to splunk. At the moment `lemur.config.py` has a `LEMUR_LOG` parameter that expects a filename. Is there a way to configure lemur to log to stdout instead of a file?\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.factory\n :platform: Unix\n :synopsis: This module contains all the needed functions to allow\n the factory app creation.\n\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\n\"\"\"\nimport os\nimport imp\nimport errno\nimport pkg_resources\n\nfrom logging import Formatter, StreamHandler\nfrom logging.handlers import RotatingFileHandler\n\nfrom flask import Flask\nfrom lemur.common.health import mod as health\nfrom lemur.extensions import db, migrate, principal, smtp_mail, metrics\n\n\nDEFAULT_BLUEPRINTS = (\n health,\n)\n\nAPI_VERSION = 1\n\n\ndef create_app(app_name=None, blueprints=None, config=None):\n \"\"\"\n Lemur application factory\n\n :param config:\n :param app_name:\n :param blueprints:\n :return:\n \"\"\"\n if not blueprints:\n blueprints = DEFAULT_BLUEPRINTS\n else:\n blueprints = blueprints + DEFAULT_BLUEPRINTS\n\n if not app_name:\n app_name = __name__\n\n app = Flask(app_name)\n configure_app(app, config)\n configure_blueprints(app, blueprints)\n configure_extensions(app)\n configure_logging(app)\n install_plugins(app)\n\n @app.teardown_appcontext\n def teardown(exception=None):\n if db.session:\n db.session.remove()\n\n return app\n\n\ndef from_file(file_path, silent=False):\n \"\"\"\n Updates the values in the config from a Python file. This function\n behaves as if the file was imported as module with the\n\n :param file_path:\n :param silent:\n \"\"\"\n d = imp.new_module('config')\n d.__file__ = file_path\n try:\n with open(file_path) as config_file:\n exec(compile(config_file.read(), # nosec: config file safe\n file_path, 'exec'), d.__dict__)\n except IOError as e:\n if silent and e.errno in (errno.ENOENT, errno.EISDIR):\n return False\n e.strerror = 'Unable to load configuration file (%s)' % e.strerror\n raise\n return d\n\n\ndef configure_app(app, config=None):\n \"\"\"\n Different ways of configuration\n\n :param app:\n :param config:\n :return:\n \"\"\"\n # respect the config first\n if config and config != 'None':\n app.config['CONFIG_PATH'] = config\n app.config.from_object(from_file(config))\n else:\n try:\n app.config.from_envvar(\"LEMUR_CONF\")\n except RuntimeError:\n # look in default paths\n if os.path.isfile(os.path.expanduser(\"~/.lemur/lemur.conf.py\")):\n app.config.from_object(from_file(os.path.expanduser(\"~/.lemur/lemur.conf.py\")))\n else:\n app.config.from_object(from_file(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'default.conf.py')))\n\n # we don't use this\n app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False\n\n\ndef configure_extensions(app):\n \"\"\"\n Attaches and configures any needed flask extensions\n to our app.\n\n :param app:\n \"\"\"\n db.init_app(app)\n migrate.init_app(app, db)\n principal.init_app(app)\n smtp_mail.init_app(app)\n metrics.init_app(app)\n\n\ndef configure_blueprints(app, blueprints):\n \"\"\"\n We prefix our APIs with their given version so that we can support\n multiple concurrent API versions.\n\n :param app:\n :param blueprints:\n \"\"\"\n for blueprint in blueprints:\n app.register_blueprint(blueprint, url_prefix=\"/api/{0}\".format(API_VERSION))\n\n\ndef configure_logging(app):\n \"\"\"\n Sets up application wide logging.\n\n :param app:\n \"\"\"\n handler = RotatingFileHandler(app.config.get('LOG_FILE', 'lemur.log'), maxBytes=10000000, backupCount=100)\n\n handler.setFormatter(Formatter(\n '%(asctime)s %(levelname)s: %(message)s '\n '[in %(pathname)s:%(lineno)d]'\n ))\n\n handler.setLevel(app.config.get('LOG_LEVEL', 'DEBUG'))\n app.logger.setLevel(app.config.get('LOG_LEVEL', 'DEBUG'))\n app.logger.addHandler(handler)\n\n stream_handler = StreamHandler()\n stream_handler.setLevel(app.config.get('LOG_LEVEL'))\n app.logger.addHandler(stream_handler)\n\n\ndef install_plugins(app):\n \"\"\"\n Installs new issuers that are not currently bundled with Lemur.\n\n :param app:\n :return:\n \"\"\"\n from lemur.plugins import plugins\n from lemur.plugins.base import register\n # entry_points={\n # 'lemur.plugins': [\n # 'verisign = lemur_verisign.plugin:VerisignPlugin'\n # ],\n # },\n for ep in pkg_resources.iter_entry_points('lemur.plugins'):\n try:\n plugin = ep.load()\n except Exception:\n import traceback\n app.logger.error(\"Failed to load plugin %r:\\n%s\\n\" % (ep.name, traceback.format_exc()))\n else:\n register(plugin)\n\n # ensure that we have some way to notify\n with app.app_context():\n try:\n slug = app.config.get(\"LEMUR_DEFAULT_NOTIFICATION_PLUGIN\", \"email-notification\")\n plugins.get(slug)\n except KeyError:\n raise Exception(\"Unable to location notification plugin: {slug}. Ensure that LEMUR_DEFAULT_NOTIFICATION_PLUGIN is set to a valid and installed notification plugin.\".format(slug=slug))\n", "path": "lemur/factory.py"}]} | 2,297 | 98 |
gh_patches_debug_29313 | rasdani/github-patches | git_diff | bokeh__bokeh-7934 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
flask_gunicorn_embed.py does not work with Tornado 5
ref: https://github.com/bokeh/bokeh/blob/master/examples/howto/server_embed/flask_gunicorn_embed.py
Running as is gets:
```
Exception in thread Thread-1:
Traceback (most recent call last):
File "/Users/bryanv/anaconda/envs/01216/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/Users/bryanv/anaconda/envs/01216/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/Users/bryanv/work/bokeh/examples/howto/server_embed/flask_gunicorn_embed.py", line 72, in bk_worker
server.start()
File "/Users/bryanv/anaconda/envs/01216/lib/python3.6/site-packages/bokeh/server/server.py", line 149, in start
self._tornado.start()
File "/Users/bryanv/anaconda/envs/01216/lib/python3.6/site-packages/bokeh/server/tornado.py", line 372, in start
self._stats_job.start()
File "/Users/bryanv/anaconda/envs/01216/lib/python3.6/site-packages/tornado/ioloop.py", line 1185, in start
self.io_loop = IOLoop.current()
File "/Use
rs/bryanv/anaconda/envs/01216/lib/python3.6/site-packages/tornado/ioloop.py", line 282, in current
loop = asyncio.get_event_loop()
File "/Users/bryanv/anaconda/envs/01216/lib/python3.6/asyncio/events.py", line 694, in get_event_loop
return get_event_loop_policy().get_event_loop()
File "/Users/bryanv/anaconda/envs/01216/lib/python3.6/asyncio/events.py", line 602, in get_event_loop
% threading.current_thread().name)
RuntimeError: There is no current event loop in thread 'Thread-1'.
```
Tried changing worker to
```
def bk_worker():
io_loop = IOLoop())
server = BaseServer(io_loop, bokeh_tornado, bokeh_http)
server.start()
server.io_loop.start()
```
but then the http requests to the `HTTPServer` just hang (the workers are getting executed the right number of times though)
cc @bdarnell any quick ideas?
</issue>
<code>
[start of examples/howto/server_embed/flask_gunicorn_embed.py]
1 from flask import Flask, render_template
2
3 from tornado.httpserver import HTTPServer
4 from tornado.ioloop import IOLoop
5
6 from bokeh.application import Application
7 from bokeh.application.handlers import FunctionHandler
8 from bokeh.embed import server_document
9 from bokeh.layouts import column
10 from bokeh.models import ColumnDataSource, Slider
11 from bokeh.plotting import figure
12 from bokeh.server.server import BaseServer
13 from bokeh.server.tornado import BokehTornado
14 from bokeh.server.util import bind_sockets
15 from bokeh.themes import Theme
16
17 if __name__ == '__main__':
18 print('This script is intended to be run with gunicorn. e.g.')
19 print()
20 print(' gunicorn -w 4 flask_gunicorn_embed:app')
21 print()
22 print('will start the app on four processes')
23 import sys
24 sys.exit()
25
26 from bokeh.sampledata.sea_surface_temperature import sea_surface_temperature
27
28 app = Flask(__name__)
29
30 def modify_doc(doc):
31 df = sea_surface_temperature.copy()
32 source = ColumnDataSource(data=df)
33
34 plot = figure(x_axis_type='datetime', y_range=(0, 25), y_axis_label='Temperature (Celsius)',
35 title="Sea Surface Temperature at 43.18, -70.43")
36 plot.line('time', 'temperature', source=source)
37
38 def callback(attr, old, new):
39 if new == 0:
40 data = df
41 else:
42 data = df.rolling('{0}D'.format(new)).mean()
43 source.data = ColumnDataSource(data=data).data
44
45 slider = Slider(start=0, end=30, value=0, step=1, title="Smoothing by N Days")
46 slider.on_change('value', callback)
47
48 doc.add_root(column(slider, plot))
49
50 doc.theme = Theme(filename="theme.yaml")
51
52 # can't use shortcuts here, since we are passing to low level BokehTornado
53 bkapp = Application(FunctionHandler(modify_doc))
54
55 bokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=["localhost:8000"])
56 bokeh_http = HTTPServer(bokeh_tornado)
57
58 # This is so that if this app is run using something like "gunicorn -w 4" then
59 # each process will listen on its own port
60 sockets, port = bind_sockets("localhost", 0)
61 bokeh_http.add_sockets(sockets)
62
63 @app.route('/', methods=['GET'])
64 def bkapp_page():
65 script = server_document('http://localhost:%d/bkapp' % port)
66 return render_template("embed.html", script=script, template="Flask")
67
68 def bk_worker():
69 io_loop = IOLoop.current()
70 server = BaseServer(io_loop, bokeh_tornado, bokeh_http)
71 server.start()
72 server.io_loop.start()
73
74 from threading import Thread
75 Thread(target=bk_worker).start()
76
[end of examples/howto/server_embed/flask_gunicorn_embed.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/howto/server_embed/flask_gunicorn_embed.py b/examples/howto/server_embed/flask_gunicorn_embed.py
--- a/examples/howto/server_embed/flask_gunicorn_embed.py
+++ b/examples/howto/server_embed/flask_gunicorn_embed.py
@@ -1,3 +1,8 @@
+try:
+ import asyncio
+except ImportError:
+ raise RuntimeError("This example requries Python3 / asyncio")
+
from flask import Flask, render_template
from tornado.httpserver import HTTPServer
@@ -52,13 +57,9 @@
# can't use shortcuts here, since we are passing to low level BokehTornado
bkapp = Application(FunctionHandler(modify_doc))
-bokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=["localhost:8000"])
-bokeh_http = HTTPServer(bokeh_tornado)
-
# This is so that if this app is run using something like "gunicorn -w 4" then
# each process will listen on its own port
sockets, port = bind_sockets("localhost", 0)
-bokeh_http.add_sockets(sockets)
@app.route('/', methods=['GET'])
def bkapp_page():
@@ -66,8 +67,13 @@
return render_template("embed.html", script=script, template="Flask")
def bk_worker():
- io_loop = IOLoop.current()
- server = BaseServer(io_loop, bokeh_tornado, bokeh_http)
+ asyncio.set_event_loop(asyncio.new_event_loop())
+
+ bokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=["localhost:8000"])
+ bokeh_http = HTTPServer(bokeh_tornado)
+ bokeh_http.add_sockets(sockets)
+
+ server = BaseServer(IOLoop.current(), bokeh_tornado, bokeh_http)
server.start()
server.io_loop.start()
| {"golden_diff": "diff --git a/examples/howto/server_embed/flask_gunicorn_embed.py b/examples/howto/server_embed/flask_gunicorn_embed.py\n--- a/examples/howto/server_embed/flask_gunicorn_embed.py\n+++ b/examples/howto/server_embed/flask_gunicorn_embed.py\n@@ -1,3 +1,8 @@\n+try:\n+ import asyncio\n+except ImportError:\n+ raise RuntimeError(\"This example requries Python3 / asyncio\")\n+\n from flask import Flask, render_template\n \n from tornado.httpserver import HTTPServer\n@@ -52,13 +57,9 @@\n # can't use shortcuts here, since we are passing to low level BokehTornado\n bkapp = Application(FunctionHandler(modify_doc))\n \n-bokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=[\"localhost:8000\"])\n-bokeh_http = HTTPServer(bokeh_tornado)\n-\n # This is so that if this app is run using something like \"gunicorn -w 4\" then\n # each process will listen on its own port\n sockets, port = bind_sockets(\"localhost\", 0)\n-bokeh_http.add_sockets(sockets)\n \n @app.route('/', methods=['GET'])\n def bkapp_page():\n@@ -66,8 +67,13 @@\n return render_template(\"embed.html\", script=script, template=\"Flask\")\n \n def bk_worker():\n- io_loop = IOLoop.current()\n- server = BaseServer(io_loop, bokeh_tornado, bokeh_http)\n+ asyncio.set_event_loop(asyncio.new_event_loop())\n+\n+ bokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=[\"localhost:8000\"])\n+ bokeh_http = HTTPServer(bokeh_tornado)\n+ bokeh_http.add_sockets(sockets)\n+\n+ server = BaseServer(IOLoop.current(), bokeh_tornado, bokeh_http)\n server.start()\n server.io_loop.start()\n", "issue": "flask_gunicorn_embed.py does not work with Tornado 5\nref: https://github.com/bokeh/bokeh/blob/master/examples/howto/server_embed/flask_gunicorn_embed.py\r\n\r\nRunning as is gets:\r\n```\r\nException in thread Thread-1:\r\nTraceback (most recent call last):\r\n File \"/Users/bryanv/anaconda/envs/01216/lib/python3.6/threading.py\", line 916, in _bootstrap_inner\r\n self.run()\r\n File \"/Users/bryanv/anaconda/envs/01216/lib/python3.6/threading.py\", line 864, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/Users/bryanv/work/bokeh/examples/howto/server_embed/flask_gunicorn_embed.py\", line 72, in bk_worker\r\n server.start()\r\n File \"/Users/bryanv/anaconda/envs/01216/lib/python3.6/site-packages/bokeh/server/server.py\", line 149, in start\r\n self._tornado.start()\r\n File \"/Users/bryanv/anaconda/envs/01216/lib/python3.6/site-packages/bokeh/server/tornado.py\", line 372, in start\r\n self._stats_job.start()\r\n File \"/Users/bryanv/anaconda/envs/01216/lib/python3.6/site-packages/tornado/ioloop.py\", line 1185, in start\r\n self.io_loop = IOLoop.current()\r\n File \"/Use\r\nrs/bryanv/anaconda/envs/01216/lib/python3.6/site-packages/tornado/ioloop.py\", line 282, in current\r\n loop = asyncio.get_event_loop()\r\n File \"/Users/bryanv/anaconda/envs/01216/lib/python3.6/asyncio/events.py\", line 694, in get_event_loop\r\n return get_event_loop_policy().get_event_loop()\r\n File \"/Users/bryanv/anaconda/envs/01216/lib/python3.6/asyncio/events.py\", line 602, in get_event_loop\r\n % threading.current_thread().name)\r\nRuntimeError: There is no current event loop in thread 'Thread-1'.\r\n```\r\n\r\nTried changing worker to \r\n```\r\ndef bk_worker():\r\n io_loop = IOLoop())\r\n server = BaseServer(io_loop, bokeh_tornado, bokeh_http)\r\n server.start()\r\n server.io_loop.start()\r\n```\r\n\r\n\r\nbut then the http requests to the `HTTPServer` just hang (the workers are getting executed the right number of times though)\r\n\r\ncc @bdarnell any quick ideas?\n", "before_files": [{"content": "from flask import Flask, render_template\n\nfrom tornado.httpserver import HTTPServer\nfrom tornado.ioloop import IOLoop\n\nfrom bokeh.application import Application\nfrom bokeh.application.handlers import FunctionHandler\nfrom bokeh.embed import server_document\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, Slider\nfrom bokeh.plotting import figure\nfrom bokeh.server.server import BaseServer\nfrom bokeh.server.tornado import BokehTornado\nfrom bokeh.server.util import bind_sockets\nfrom bokeh.themes import Theme\n\nif __name__ == '__main__':\n print('This script is intended to be run with gunicorn. e.g.')\n print()\n print(' gunicorn -w 4 flask_gunicorn_embed:app')\n print()\n print('will start the app on four processes')\n import sys\n sys.exit()\n\nfrom bokeh.sampledata.sea_surface_temperature import sea_surface_temperature\n\napp = Flask(__name__)\n\ndef modify_doc(doc):\n df = sea_surface_temperature.copy()\n source = ColumnDataSource(data=df)\n\n plot = figure(x_axis_type='datetime', y_range=(0, 25), y_axis_label='Temperature (Celsius)',\n title=\"Sea Surface Temperature at 43.18, -70.43\")\n plot.line('time', 'temperature', source=source)\n\n def callback(attr, old, new):\n if new == 0:\n data = df\n else:\n data = df.rolling('{0}D'.format(new)).mean()\n source.data = ColumnDataSource(data=data).data\n\n slider = Slider(start=0, end=30, value=0, step=1, title=\"Smoothing by N Days\")\n slider.on_change('value', callback)\n\n doc.add_root(column(slider, plot))\n\n doc.theme = Theme(filename=\"theme.yaml\")\n\n# can't use shortcuts here, since we are passing to low level BokehTornado\nbkapp = Application(FunctionHandler(modify_doc))\n\nbokeh_tornado = BokehTornado({'/bkapp': bkapp}, extra_websocket_origins=[\"localhost:8000\"])\nbokeh_http = HTTPServer(bokeh_tornado)\n\n# This is so that if this app is run using something like \"gunicorn -w 4\" then\n# each process will listen on its own port\nsockets, port = bind_sockets(\"localhost\", 0)\nbokeh_http.add_sockets(sockets)\n\[email protected]('/', methods=['GET'])\ndef bkapp_page():\n script = server_document('http://localhost:%d/bkapp' % port)\n return render_template(\"embed.html\", script=script, template=\"Flask\")\n\ndef bk_worker():\n io_loop = IOLoop.current()\n server = BaseServer(io_loop, bokeh_tornado, bokeh_http)\n server.start()\n server.io_loop.start()\n\nfrom threading import Thread\nThread(target=bk_worker).start()\n", "path": "examples/howto/server_embed/flask_gunicorn_embed.py"}]} | 1,920 | 428 |
gh_patches_debug_19411 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-1668 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'readPreference' param in config URI was ignored in mongoengine.register_connection
I have a config string `'mongodb://mongodb01.test.vpc,mongodb02.test.vpc,mongodb03.test.vpc/prod?readPreference=secondaryPreferred'`
it was parsed to dict below by pymongo.uri_parser.parse_uri
```
{'collection': None,
'database': 'prod',
'nodelist': [('mongodb01.test.vpc', 27017),
('mongodb02.test.vpc', 27017),
('mongodb03.test.vpc', 27017)],
'options': {'readpreference': 'secondaryPreferred'},
'password': None,
'username': None}
```
but mongoengine only read in 3 params if I read correctly, ignored 'readpreference'
```
if 'replicaset' in uri_options:
conn_settings['replicaSet'] = uri_options['replicaset']
if 'authsource' in uri_options:
conn_settings['authentication_source'] = uri_options['authsource']
if 'authmechanism' in uri_options:
conn_settings['authentication_mechanism'] = uri_options['authmechanism']
```
thus made my config not functioning as I needed.
I don't know if this is a bug or a feature. To achieve my goal do I have to explicitly invoke `connect(..., read_preference=secondaryPreferred)`? Isn't controlling readpreference in a config string be more flexible?
</issue>
<code>
[start of mongoengine/connection.py]
1 from pymongo import MongoClient, ReadPreference, uri_parser
2 import six
3
4 from mongoengine.python_support import IS_PYMONGO_3
5
6 __all__ = ['MongoEngineConnectionError', 'connect', 'register_connection',
7 'DEFAULT_CONNECTION_NAME']
8
9
10 DEFAULT_CONNECTION_NAME = 'default'
11
12 if IS_PYMONGO_3:
13 READ_PREFERENCE = ReadPreference.PRIMARY
14 else:
15 from pymongo import MongoReplicaSetClient
16 READ_PREFERENCE = False
17
18
19 class MongoEngineConnectionError(Exception):
20 """Error raised when the database connection can't be established or
21 when a connection with a requested alias can't be retrieved.
22 """
23 pass
24
25
26 _connection_settings = {}
27 _connections = {}
28 _dbs = {}
29
30
31 def register_connection(alias, name=None, host=None, port=None,
32 read_preference=READ_PREFERENCE,
33 username=None, password=None,
34 authentication_source=None,
35 authentication_mechanism=None,
36 **kwargs):
37 """Add a connection.
38
39 :param alias: the name that will be used to refer to this connection
40 throughout MongoEngine
41 :param name: the name of the specific database to use
42 :param host: the host name of the :program:`mongod` instance to connect to
43 :param port: the port that the :program:`mongod` instance is running on
44 :param read_preference: The read preference for the collection
45 ** Added pymongo 2.1
46 :param username: username to authenticate with
47 :param password: password to authenticate with
48 :param authentication_source: database to authenticate against
49 :param authentication_mechanism: database authentication mechanisms.
50 By default, use SCRAM-SHA-1 with MongoDB 3.0 and later,
51 MONGODB-CR (MongoDB Challenge Response protocol) for older servers.
52 :param is_mock: explicitly use mongomock for this connection
53 (can also be done by using `mongomock://` as db host prefix)
54 :param kwargs: ad-hoc parameters to be passed into the pymongo driver,
55 for example maxpoolsize, tz_aware, etc. See the documentation
56 for pymongo's `MongoClient` for a full list.
57
58 .. versionchanged:: 0.10.6 - added mongomock support
59 """
60 conn_settings = {
61 'name': name or 'test',
62 'host': host or 'localhost',
63 'port': port or 27017,
64 'read_preference': read_preference,
65 'username': username,
66 'password': password,
67 'authentication_source': authentication_source,
68 'authentication_mechanism': authentication_mechanism
69 }
70
71 conn_host = conn_settings['host']
72
73 # Host can be a list or a string, so if string, force to a list.
74 if isinstance(conn_host, six.string_types):
75 conn_host = [conn_host]
76
77 resolved_hosts = []
78 for entity in conn_host:
79
80 # Handle Mongomock
81 if entity.startswith('mongomock://'):
82 conn_settings['is_mock'] = True
83 # `mongomock://` is not a valid url prefix and must be replaced by `mongodb://`
84 resolved_hosts.append(entity.replace('mongomock://', 'mongodb://', 1))
85
86 # Handle URI style connections, only updating connection params which
87 # were explicitly specified in the URI.
88 elif '://' in entity:
89 uri_dict = uri_parser.parse_uri(entity)
90 resolved_hosts.append(entity)
91
92 if uri_dict.get('database'):
93 conn_settings['name'] = uri_dict.get('database')
94
95 for param in ('read_preference', 'username', 'password'):
96 if uri_dict.get(param):
97 conn_settings[param] = uri_dict[param]
98
99 uri_options = uri_dict['options']
100 if 'replicaset' in uri_options:
101 conn_settings['replicaSet'] = uri_options['replicaset']
102 if 'authsource' in uri_options:
103 conn_settings['authentication_source'] = uri_options['authsource']
104 if 'authmechanism' in uri_options:
105 conn_settings['authentication_mechanism'] = uri_options['authmechanism']
106 else:
107 resolved_hosts.append(entity)
108 conn_settings['host'] = resolved_hosts
109
110 # Deprecated parameters that should not be passed on
111 kwargs.pop('slaves', None)
112 kwargs.pop('is_slave', None)
113
114 conn_settings.update(kwargs)
115 _connection_settings[alias] = conn_settings
116
117
118 def disconnect(alias=DEFAULT_CONNECTION_NAME):
119 """Close the connection with a given alias."""
120 if alias in _connections:
121 get_connection(alias=alias).close()
122 del _connections[alias]
123 if alias in _dbs:
124 del _dbs[alias]
125
126
127 def get_connection(alias=DEFAULT_CONNECTION_NAME, reconnect=False):
128 """Return a connection with a given alias."""
129
130 # Connect to the database if not already connected
131 if reconnect:
132 disconnect(alias)
133
134 # If the requested alias already exists in the _connections list, return
135 # it immediately.
136 if alias in _connections:
137 return _connections[alias]
138
139 # Validate that the requested alias exists in the _connection_settings.
140 # Raise MongoEngineConnectionError if it doesn't.
141 if alias not in _connection_settings:
142 if alias == DEFAULT_CONNECTION_NAME:
143 msg = 'You have not defined a default connection'
144 else:
145 msg = 'Connection with alias "%s" has not been defined' % alias
146 raise MongoEngineConnectionError(msg)
147
148 def _clean_settings(settings_dict):
149 # set literal more efficient than calling set function
150 irrelevant_fields_set = {
151 'name', 'username', 'password',
152 'authentication_source', 'authentication_mechanism'
153 }
154 return {
155 k: v for k, v in settings_dict.items()
156 if k not in irrelevant_fields_set
157 }
158
159 # Retrieve a copy of the connection settings associated with the requested
160 # alias and remove the database name and authentication info (we don't
161 # care about them at this point).
162 conn_settings = _clean_settings(_connection_settings[alias].copy())
163
164 # Determine if we should use PyMongo's or mongomock's MongoClient.
165 is_mock = conn_settings.pop('is_mock', False)
166 if is_mock:
167 try:
168 import mongomock
169 except ImportError:
170 raise RuntimeError('You need mongomock installed to mock '
171 'MongoEngine.')
172 connection_class = mongomock.MongoClient
173 else:
174 connection_class = MongoClient
175
176 # For replica set connections with PyMongo 2.x, use
177 # MongoReplicaSetClient.
178 # TODO remove this once we stop supporting PyMongo 2.x.
179 if 'replicaSet' in conn_settings and not IS_PYMONGO_3:
180 connection_class = MongoReplicaSetClient
181 conn_settings['hosts_or_uri'] = conn_settings.pop('host', None)
182
183 # hosts_or_uri has to be a string, so if 'host' was provided
184 # as a list, join its parts and separate them by ','
185 if isinstance(conn_settings['hosts_or_uri'], list):
186 conn_settings['hosts_or_uri'] = ','.join(
187 conn_settings['hosts_or_uri'])
188
189 # Discard port since it can't be used on MongoReplicaSetClient
190 conn_settings.pop('port', None)
191
192 # Iterate over all of the connection settings and if a connection with
193 # the same parameters is already established, use it instead of creating
194 # a new one.
195 existing_connection = None
196 connection_settings_iterator = (
197 (db_alias, settings.copy())
198 for db_alias, settings in _connection_settings.items()
199 )
200 for db_alias, connection_settings in connection_settings_iterator:
201 connection_settings = _clean_settings(connection_settings)
202 if conn_settings == connection_settings and _connections.get(db_alias):
203 existing_connection = _connections[db_alias]
204 break
205
206 # If an existing connection was found, assign it to the new alias
207 if existing_connection:
208 _connections[alias] = existing_connection
209 else:
210 # Otherwise, create the new connection for this alias. Raise
211 # MongoEngineConnectionError if it can't be established.
212 try:
213 _connections[alias] = connection_class(**conn_settings)
214 except Exception as e:
215 raise MongoEngineConnectionError(
216 'Cannot connect to database %s :\n%s' % (alias, e))
217
218 return _connections[alias]
219
220
221 def get_db(alias=DEFAULT_CONNECTION_NAME, reconnect=False):
222 if reconnect:
223 disconnect(alias)
224
225 if alias not in _dbs:
226 conn = get_connection(alias)
227 conn_settings = _connection_settings[alias]
228 db = conn[conn_settings['name']]
229 auth_kwargs = {'source': conn_settings['authentication_source']}
230 if conn_settings['authentication_mechanism'] is not None:
231 auth_kwargs['mechanism'] = conn_settings['authentication_mechanism']
232 # Authenticate if necessary
233 if conn_settings['username'] and (conn_settings['password'] or
234 conn_settings['authentication_mechanism'] == 'MONGODB-X509'):
235 db.authenticate(conn_settings['username'], conn_settings['password'], **auth_kwargs)
236 _dbs[alias] = db
237 return _dbs[alias]
238
239
240 def connect(db=None, alias=DEFAULT_CONNECTION_NAME, **kwargs):
241 """Connect to the database specified by the 'db' argument.
242
243 Connection settings may be provided here as well if the database is not
244 running on the default port on localhost. If authentication is needed,
245 provide username and password arguments as well.
246
247 Multiple databases are supported by using aliases. Provide a separate
248 `alias` to connect to a different instance of :program:`mongod`.
249
250 See the docstring for `register_connection` for more details about all
251 supported kwargs.
252
253 .. versionchanged:: 0.6 - added multiple database support.
254 """
255 if alias not in _connections:
256 register_connection(alias, db, **kwargs)
257
258 return get_connection(alias)
259
260
261 # Support old naming convention
262 _get_connection = get_connection
263 _get_db = get_db
264
[end of mongoengine/connection.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mongoengine/connection.py b/mongoengine/connection.py
--- a/mongoengine/connection.py
+++ b/mongoengine/connection.py
@@ -103,6 +103,18 @@
conn_settings['authentication_source'] = uri_options['authsource']
if 'authmechanism' in uri_options:
conn_settings['authentication_mechanism'] = uri_options['authmechanism']
+ if IS_PYMONGO_3 and 'readpreference' in uri_options:
+ read_preferences = (
+ ReadPreference.NEAREST,
+ ReadPreference.PRIMARY,
+ ReadPreference.PRIMARY_PREFERRED,
+ ReadPreference.SECONDARY,
+ ReadPreference.SECONDARY_PREFERRED)
+ read_pf_mode = uri_options['readpreference'].lower()
+ for preference in read_preferences:
+ if preference.name.lower() == read_pf_mode:
+ conn_settings['read_preference'] = preference
+ break
else:
resolved_hosts.append(entity)
conn_settings['host'] = resolved_hosts
| {"golden_diff": "diff --git a/mongoengine/connection.py b/mongoengine/connection.py\n--- a/mongoengine/connection.py\n+++ b/mongoengine/connection.py\n@@ -103,6 +103,18 @@\n conn_settings['authentication_source'] = uri_options['authsource']\n if 'authmechanism' in uri_options:\n conn_settings['authentication_mechanism'] = uri_options['authmechanism']\n+ if IS_PYMONGO_3 and 'readpreference' in uri_options:\n+ read_preferences = (\n+ ReadPreference.NEAREST,\n+ ReadPreference.PRIMARY,\n+ ReadPreference.PRIMARY_PREFERRED,\n+ ReadPreference.SECONDARY,\n+ ReadPreference.SECONDARY_PREFERRED)\n+ read_pf_mode = uri_options['readpreference'].lower()\n+ for preference in read_preferences:\n+ if preference.name.lower() == read_pf_mode:\n+ conn_settings['read_preference'] = preference\n+ break\n else:\n resolved_hosts.append(entity)\n conn_settings['host'] = resolved_hosts\n", "issue": "'readPreference' param in config URI was ignored in mongoengine.register_connection\nI have a config string `'mongodb://mongodb01.test.vpc,mongodb02.test.vpc,mongodb03.test.vpc/prod?readPreference=secondaryPreferred'`\r\n\r\nit was parsed to dict below by pymongo.uri_parser.parse_uri\r\n```\r\n{'collection': None,\r\n 'database': 'prod',\r\n 'nodelist': [('mongodb01.test.vpc', 27017),\r\n ('mongodb02.test.vpc', 27017),\r\n ('mongodb03.test.vpc', 27017)],\r\n 'options': {'readpreference': 'secondaryPreferred'},\r\n 'password': None,\r\n 'username': None}\r\n```\r\nbut mongoengine only read in 3 params if I read correctly, ignored 'readpreference'\r\n```\r\n if 'replicaset' in uri_options:\r\n conn_settings['replicaSet'] = uri_options['replicaset']\r\n if 'authsource' in uri_options:\r\n conn_settings['authentication_source'] = uri_options['authsource']\r\n if 'authmechanism' in uri_options:\r\n conn_settings['authentication_mechanism'] = uri_options['authmechanism']\r\n```\r\nthus made my config not functioning as I needed.\r\n\r\n I don't know if this is a bug or a feature. To achieve my goal do I have to explicitly invoke `connect(..., read_preference=secondaryPreferred)`? Isn't controlling readpreference in a config string be more flexible?\n", "before_files": [{"content": "from pymongo import MongoClient, ReadPreference, uri_parser\nimport six\n\nfrom mongoengine.python_support import IS_PYMONGO_3\n\n__all__ = ['MongoEngineConnectionError', 'connect', 'register_connection',\n 'DEFAULT_CONNECTION_NAME']\n\n\nDEFAULT_CONNECTION_NAME = 'default'\n\nif IS_PYMONGO_3:\n READ_PREFERENCE = ReadPreference.PRIMARY\nelse:\n from pymongo import MongoReplicaSetClient\n READ_PREFERENCE = False\n\n\nclass MongoEngineConnectionError(Exception):\n \"\"\"Error raised when the database connection can't be established or\n when a connection with a requested alias can't be retrieved.\n \"\"\"\n pass\n\n\n_connection_settings = {}\n_connections = {}\n_dbs = {}\n\n\ndef register_connection(alias, name=None, host=None, port=None,\n read_preference=READ_PREFERENCE,\n username=None, password=None,\n authentication_source=None,\n authentication_mechanism=None,\n **kwargs):\n \"\"\"Add a connection.\n\n :param alias: the name that will be used to refer to this connection\n throughout MongoEngine\n :param name: the name of the specific database to use\n :param host: the host name of the :program:`mongod` instance to connect to\n :param port: the port that the :program:`mongod` instance is running on\n :param read_preference: The read preference for the collection\n ** Added pymongo 2.1\n :param username: username to authenticate with\n :param password: password to authenticate with\n :param authentication_source: database to authenticate against\n :param authentication_mechanism: database authentication mechanisms.\n By default, use SCRAM-SHA-1 with MongoDB 3.0 and later,\n MONGODB-CR (MongoDB Challenge Response protocol) for older servers.\n :param is_mock: explicitly use mongomock for this connection\n (can also be done by using `mongomock://` as db host prefix)\n :param kwargs: ad-hoc parameters to be passed into the pymongo driver,\n for example maxpoolsize, tz_aware, etc. See the documentation\n for pymongo's `MongoClient` for a full list.\n\n .. versionchanged:: 0.10.6 - added mongomock support\n \"\"\"\n conn_settings = {\n 'name': name or 'test',\n 'host': host or 'localhost',\n 'port': port or 27017,\n 'read_preference': read_preference,\n 'username': username,\n 'password': password,\n 'authentication_source': authentication_source,\n 'authentication_mechanism': authentication_mechanism\n }\n\n conn_host = conn_settings['host']\n\n # Host can be a list or a string, so if string, force to a list.\n if isinstance(conn_host, six.string_types):\n conn_host = [conn_host]\n\n resolved_hosts = []\n for entity in conn_host:\n\n # Handle Mongomock\n if entity.startswith('mongomock://'):\n conn_settings['is_mock'] = True\n # `mongomock://` is not a valid url prefix and must be replaced by `mongodb://`\n resolved_hosts.append(entity.replace('mongomock://', 'mongodb://', 1))\n\n # Handle URI style connections, only updating connection params which\n # were explicitly specified in the URI.\n elif '://' in entity:\n uri_dict = uri_parser.parse_uri(entity)\n resolved_hosts.append(entity)\n\n if uri_dict.get('database'):\n conn_settings['name'] = uri_dict.get('database')\n\n for param in ('read_preference', 'username', 'password'):\n if uri_dict.get(param):\n conn_settings[param] = uri_dict[param]\n\n uri_options = uri_dict['options']\n if 'replicaset' in uri_options:\n conn_settings['replicaSet'] = uri_options['replicaset']\n if 'authsource' in uri_options:\n conn_settings['authentication_source'] = uri_options['authsource']\n if 'authmechanism' in uri_options:\n conn_settings['authentication_mechanism'] = uri_options['authmechanism']\n else:\n resolved_hosts.append(entity)\n conn_settings['host'] = resolved_hosts\n\n # Deprecated parameters that should not be passed on\n kwargs.pop('slaves', None)\n kwargs.pop('is_slave', None)\n\n conn_settings.update(kwargs)\n _connection_settings[alias] = conn_settings\n\n\ndef disconnect(alias=DEFAULT_CONNECTION_NAME):\n \"\"\"Close the connection with a given alias.\"\"\"\n if alias in _connections:\n get_connection(alias=alias).close()\n del _connections[alias]\n if alias in _dbs:\n del _dbs[alias]\n\n\ndef get_connection(alias=DEFAULT_CONNECTION_NAME, reconnect=False):\n \"\"\"Return a connection with a given alias.\"\"\"\n\n # Connect to the database if not already connected\n if reconnect:\n disconnect(alias)\n\n # If the requested alias already exists in the _connections list, return\n # it immediately.\n if alias in _connections:\n return _connections[alias]\n\n # Validate that the requested alias exists in the _connection_settings.\n # Raise MongoEngineConnectionError if it doesn't.\n if alias not in _connection_settings:\n if alias == DEFAULT_CONNECTION_NAME:\n msg = 'You have not defined a default connection'\n else:\n msg = 'Connection with alias \"%s\" has not been defined' % alias\n raise MongoEngineConnectionError(msg)\n\n def _clean_settings(settings_dict):\n # set literal more efficient than calling set function\n irrelevant_fields_set = {\n 'name', 'username', 'password',\n 'authentication_source', 'authentication_mechanism'\n }\n return {\n k: v for k, v in settings_dict.items()\n if k not in irrelevant_fields_set\n }\n\n # Retrieve a copy of the connection settings associated with the requested\n # alias and remove the database name and authentication info (we don't\n # care about them at this point).\n conn_settings = _clean_settings(_connection_settings[alias].copy())\n\n # Determine if we should use PyMongo's or mongomock's MongoClient.\n is_mock = conn_settings.pop('is_mock', False)\n if is_mock:\n try:\n import mongomock\n except ImportError:\n raise RuntimeError('You need mongomock installed to mock '\n 'MongoEngine.')\n connection_class = mongomock.MongoClient\n else:\n connection_class = MongoClient\n\n # For replica set connections with PyMongo 2.x, use\n # MongoReplicaSetClient.\n # TODO remove this once we stop supporting PyMongo 2.x.\n if 'replicaSet' in conn_settings and not IS_PYMONGO_3:\n connection_class = MongoReplicaSetClient\n conn_settings['hosts_or_uri'] = conn_settings.pop('host', None)\n\n # hosts_or_uri has to be a string, so if 'host' was provided\n # as a list, join its parts and separate them by ','\n if isinstance(conn_settings['hosts_or_uri'], list):\n conn_settings['hosts_or_uri'] = ','.join(\n conn_settings['hosts_or_uri'])\n\n # Discard port since it can't be used on MongoReplicaSetClient\n conn_settings.pop('port', None)\n\n # Iterate over all of the connection settings and if a connection with\n # the same parameters is already established, use it instead of creating\n # a new one.\n existing_connection = None\n connection_settings_iterator = (\n (db_alias, settings.copy())\n for db_alias, settings in _connection_settings.items()\n )\n for db_alias, connection_settings in connection_settings_iterator:\n connection_settings = _clean_settings(connection_settings)\n if conn_settings == connection_settings and _connections.get(db_alias):\n existing_connection = _connections[db_alias]\n break\n\n # If an existing connection was found, assign it to the new alias\n if existing_connection:\n _connections[alias] = existing_connection\n else:\n # Otherwise, create the new connection for this alias. Raise\n # MongoEngineConnectionError if it can't be established.\n try:\n _connections[alias] = connection_class(**conn_settings)\n except Exception as e:\n raise MongoEngineConnectionError(\n 'Cannot connect to database %s :\\n%s' % (alias, e))\n\n return _connections[alias]\n\n\ndef get_db(alias=DEFAULT_CONNECTION_NAME, reconnect=False):\n if reconnect:\n disconnect(alias)\n\n if alias not in _dbs:\n conn = get_connection(alias)\n conn_settings = _connection_settings[alias]\n db = conn[conn_settings['name']]\n auth_kwargs = {'source': conn_settings['authentication_source']}\n if conn_settings['authentication_mechanism'] is not None:\n auth_kwargs['mechanism'] = conn_settings['authentication_mechanism']\n # Authenticate if necessary\n if conn_settings['username'] and (conn_settings['password'] or\n conn_settings['authentication_mechanism'] == 'MONGODB-X509'):\n db.authenticate(conn_settings['username'], conn_settings['password'], **auth_kwargs)\n _dbs[alias] = db\n return _dbs[alias]\n\n\ndef connect(db=None, alias=DEFAULT_CONNECTION_NAME, **kwargs):\n \"\"\"Connect to the database specified by the 'db' argument.\n\n Connection settings may be provided here as well if the database is not\n running on the default port on localhost. If authentication is needed,\n provide username and password arguments as well.\n\n Multiple databases are supported by using aliases. Provide a separate\n `alias` to connect to a different instance of :program:`mongod`.\n\n See the docstring for `register_connection` for more details about all\n supported kwargs.\n\n .. versionchanged:: 0.6 - added multiple database support.\n \"\"\"\n if alias not in _connections:\n register_connection(alias, db, **kwargs)\n\n return get_connection(alias)\n\n\n# Support old naming convention\n_get_connection = get_connection\n_get_db = get_db\n", "path": "mongoengine/connection.py"}]} | 3,718 | 227 |
gh_patches_debug_9231 | rasdani/github-patches | git_diff | privacyidea__privacyidea-2615 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
no serial in polling on /ttype/push
We may have a bug in push polling. This exception occurs on polling
https://gist.github.com/laclaro/743618d11f61f8a817e273db6b804a9a
This may be related to #2534.
</issue>
<code>
[start of privacyidea/api/ttype.py]
1 # -*- coding: utf-8 -*-
2 #
3 # http://www.privacyidea.org
4 # (c) Cornelius Kölbel, privacyidea.org
5 #
6 # 2015-09-01 Cornelius Kölbel, <[email protected]>
7 # Initial writeup
8 #
9 # This code is free software; you can redistribute it and/or
10 # modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE
11 # License as published by the Free Software Foundation; either
12 # version 3 of the License, or any later version.
13 #
14 # This code is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY; without even the implied warranty of
16 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 # GNU AFFERO GENERAL PUBLIC LICENSE for more details.
18 #
19 # You should have received a copy of the GNU Affero General Public
20 # License along with this program. If not, see <http://www.gnu.org/licenses/>.
21 #
22 """
23 This API endpoint is a generic endpoint that can be used by any token
24 type.
25
26 The tokentype needs to implement a classmethod *api_endpoint* and can then be
27 called by /ttype/<tokentype>.
28 This way, each tokentype can create its own API without the need to change
29 the core API.
30
31 The TiQR Token uses this API to implement its special functionalities. See
32 :ref:`code_tiqr_token`.
33 """
34 from flask import (Blueprint,
35 request)
36 from .lib.utils import getParam
37 from ..lib.log import log_with
38 from flask import g, jsonify, current_app
39 import logging
40 from privacyidea.api.lib.utils import get_all_params
41 from privacyidea.lib.policy import PolicyClass
42 from privacyidea.lib.audit import getAudit
43 from privacyidea.lib.config import (get_token_class, get_from_config,
44 SYSCONF, ensure_no_config_object)
45 from privacyidea.lib.user import get_user_from_param
46 from privacyidea.lib.utils import get_client_ip
47 import json
48
49 log = logging.getLogger(__name__)
50
51 ttype_blueprint = Blueprint('ttype_blueprint', __name__)
52
53
54 @ttype_blueprint.before_request
55 def before_request():
56 """
57 This is executed before the request
58 """
59 ensure_no_config_object()
60 request.all_data = get_all_params(request.values, request.data)
61 privacyidea_server = current_app.config.get("PI_AUDIT_SERVERNAME") or \
62 request.host
63 # Create a policy_object, that reads the database audit settings
64 # and contains the complete policy definition during the request.
65 # This audit_object can be used in the postpolicy and prepolicy and it
66 # can be passed to the innerpolicies.
67 g.policy_object = PolicyClass()
68 g.audit_object = getAudit(current_app.config)
69 # access_route contains the ip adresses of all clients, hops and proxies.
70 g.client_ip = get_client_ip(request,
71 get_from_config(SYSCONF.OVERRIDECLIENT))
72 g.audit_object.log({"success": False,
73 "action_detail": "",
74 "client": g.client_ip,
75 "client_user_agent": request.user_agent.browser,
76 "privacyidea_server": privacyidea_server,
77 "action": "{0!s} {1!s}".format(request.method, request.url_rule),
78 "info": ""})
79
80
81 @ttype_blueprint.route('/<ttype>', methods=['POST', 'GET'])
82 @log_with(log)
83 def token(ttype=None):
84 """
85 This is a special token function. Each token type can define an
86 additional API call, that does not need authentication on the REST API
87 level.
88
89 :return: Token Type dependent
90 """
91 tokenc = get_token_class(ttype)
92 res = tokenc.api_endpoint(request, g)
93 serial = getParam(request.all_data, "serial")
94 user = get_user_from_param(request.all_data)
95 g.audit_object.log({"success": 1,
96 "user": user.login,
97 "realm": user.realm,
98 "serial": serial,
99 "token_type": ttype})
100 if res[0] == "json":
101 return jsonify(res[1])
102 elif res[0] in ["html", "plain"]:
103 return current_app.response_class(res[1], mimetype="text/{0!s}".format(res[0]))
104 elif len(res) == 2:
105 return current_app.response_class(json.dumps(res[1]),
106 mimetype="application/{0!s}".format(res[0]))
107 else:
108 return current_app.response_class(res[1], mimetype="application/octet-binary",
109 headers=res[2])
110
[end of privacyidea/api/ttype.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/privacyidea/api/ttype.py b/privacyidea/api/ttype.py
--- a/privacyidea/api/ttype.py
+++ b/privacyidea/api/ttype.py
@@ -69,6 +69,7 @@
# access_route contains the ip adresses of all clients, hops and proxies.
g.client_ip = get_client_ip(request,
get_from_config(SYSCONF.OVERRIDECLIENT))
+ g.serial = getParam(request.all_data, "serial") or None
g.audit_object.log({"success": False,
"action_detail": "",
"client": g.client_ip,
| {"golden_diff": "diff --git a/privacyidea/api/ttype.py b/privacyidea/api/ttype.py\n--- a/privacyidea/api/ttype.py\n+++ b/privacyidea/api/ttype.py\n@@ -69,6 +69,7 @@\n # access_route contains the ip adresses of all clients, hops and proxies.\n g.client_ip = get_client_ip(request,\n get_from_config(SYSCONF.OVERRIDECLIENT))\n+ g.serial = getParam(request.all_data, \"serial\") or None\n g.audit_object.log({\"success\": False,\n \"action_detail\": \"\",\n \"client\": g.client_ip,\n", "issue": "no serial in polling on /ttype/push\nWe may have a bug in push polling. This exception occurs on polling\r\n\r\nhttps://gist.github.com/laclaro/743618d11f61f8a817e273db6b804a9a\r\n\r\nThis may be related to #2534.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# http://www.privacyidea.org\n# (c) Cornelius K\u00f6lbel, privacyidea.org\n#\n# 2015-09-01 Cornelius K\u00f6lbel, <[email protected]>\n# Initial writeup\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n\"\"\"\nThis API endpoint is a generic endpoint that can be used by any token\ntype.\n\nThe tokentype needs to implement a classmethod *api_endpoint* and can then be\ncalled by /ttype/<tokentype>.\nThis way, each tokentype can create its own API without the need to change\nthe core API.\n\nThe TiQR Token uses this API to implement its special functionalities. See\n:ref:`code_tiqr_token`.\n\"\"\"\nfrom flask import (Blueprint,\n request)\nfrom .lib.utils import getParam\nfrom ..lib.log import log_with\nfrom flask import g, jsonify, current_app\nimport logging\nfrom privacyidea.api.lib.utils import get_all_params\nfrom privacyidea.lib.policy import PolicyClass\nfrom privacyidea.lib.audit import getAudit\nfrom privacyidea.lib.config import (get_token_class, get_from_config,\n SYSCONF, ensure_no_config_object)\nfrom privacyidea.lib.user import get_user_from_param\nfrom privacyidea.lib.utils import get_client_ip\nimport json\n\nlog = logging.getLogger(__name__)\n\nttype_blueprint = Blueprint('ttype_blueprint', __name__)\n\n\n@ttype_blueprint.before_request\ndef before_request():\n \"\"\"\n This is executed before the request\n \"\"\"\n ensure_no_config_object()\n request.all_data = get_all_params(request.values, request.data)\n privacyidea_server = current_app.config.get(\"PI_AUDIT_SERVERNAME\") or \\\n request.host\n # Create a policy_object, that reads the database audit settings\n # and contains the complete policy definition during the request.\n # This audit_object can be used in the postpolicy and prepolicy and it\n # can be passed to the innerpolicies.\n g.policy_object = PolicyClass()\n g.audit_object = getAudit(current_app.config)\n # access_route contains the ip adresses of all clients, hops and proxies.\n g.client_ip = get_client_ip(request,\n get_from_config(SYSCONF.OVERRIDECLIENT))\n g.audit_object.log({\"success\": False,\n \"action_detail\": \"\",\n \"client\": g.client_ip,\n \"client_user_agent\": request.user_agent.browser,\n \"privacyidea_server\": privacyidea_server,\n \"action\": \"{0!s} {1!s}\".format(request.method, request.url_rule),\n \"info\": \"\"})\n\n\n@ttype_blueprint.route('/<ttype>', methods=['POST', 'GET'])\n@log_with(log)\ndef token(ttype=None):\n \"\"\"\n This is a special token function. Each token type can define an\n additional API call, that does not need authentication on the REST API\n level.\n\n :return: Token Type dependent\n \"\"\"\n tokenc = get_token_class(ttype)\n res = tokenc.api_endpoint(request, g)\n serial = getParam(request.all_data, \"serial\")\n user = get_user_from_param(request.all_data)\n g.audit_object.log({\"success\": 1,\n \"user\": user.login,\n \"realm\": user.realm,\n \"serial\": serial,\n \"token_type\": ttype})\n if res[0] == \"json\":\n return jsonify(res[1])\n elif res[0] in [\"html\", \"plain\"]:\n return current_app.response_class(res[1], mimetype=\"text/{0!s}\".format(res[0]))\n elif len(res) == 2:\n return current_app.response_class(json.dumps(res[1]),\n mimetype=\"application/{0!s}\".format(res[0]))\n else:\n return current_app.response_class(res[1], mimetype=\"application/octet-binary\",\n headers=res[2])\n", "path": "privacyidea/api/ttype.py"}]} | 1,811 | 134 |
gh_patches_debug_6576 | rasdani/github-patches | git_diff | ephios-dev__ephios-757 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make minors identifiable on event detail page
As an Einsatzleiter, I want to quickly grasp which participants are younger than 18 years. For that purpose, I want to have the participation boxes on the event detail page/shift box to display a small warning/indication, e.g. a red corner or similar.
</issue>
<code>
[start of ephios/core/signup/participants.py]
1 import dataclasses
2 import functools
3 from datetime import date
4 from typing import Optional
5
6 from django.contrib.auth import get_user_model
7 from django.db.models import QuerySet
8 from django.urls import reverse
9 from django.utils.safestring import mark_safe
10 from django.utils.translation import gettext_lazy as _
11
12 from ephios.core.models import AbstractParticipation, LocalParticipation, Qualification
13 from ephios.core.models.events import PlaceholderParticipation
14
15
16 @dataclasses.dataclass(frozen=True)
17 class AbstractParticipant:
18 first_name: str
19 last_name: str
20 qualifications: QuerySet = dataclasses.field(hash=False)
21 date_of_birth: Optional[date]
22 email: Optional[str] # if set to None, no notifications are sent
23
24 def get_age(self, today: date = None):
25 if self.date_of_birth is None:
26 return None
27 today, born = today or date.today(), self.date_of_birth
28 return today.year - born.year - ((today.month, today.day) < (born.month, born.day))
29
30 def __str__(self):
31 return f"{self.first_name} {self.last_name}"
32
33 def new_participation(self, shift):
34 raise NotImplementedError
35
36 def participation_for(self, shift):
37 """Return the participation object for a shift. Return None if it does not exist."""
38 raise NotImplementedError
39
40 def all_participations(self):
41 """Return all participations for this participant"""
42 raise NotImplementedError
43
44 @functools.lru_cache(maxsize=64)
45 def collect_all_qualifications(self) -> set:
46 return Qualification.collect_all_included_qualifications(self.qualifications)
47
48 def has_qualifications(self, qualifications):
49 return set(qualifications) <= self.collect_all_qualifications()
50
51 def reverse_signup_action(self, shift):
52 raise NotImplementedError
53
54 def reverse_event_detail(self, event):
55 raise NotImplementedError
56
57 @property
58 def icon(self):
59 return mark_safe('<span class="fa fa-user"></span>')
60
61
62 @dataclasses.dataclass(frozen=True)
63 class LocalUserParticipant(AbstractParticipant):
64 user: get_user_model()
65
66 def new_participation(self, shift):
67 return LocalParticipation(shift=shift, user=self.user)
68
69 def participation_for(self, shift):
70 try:
71 return LocalParticipation.objects.get(shift=shift, user=self.user)
72 except LocalParticipation.DoesNotExist:
73 return None
74
75 def all_participations(self):
76 return LocalParticipation.objects.filter(user=self.user)
77
78 def reverse_signup_action(self, shift):
79 return reverse("core:signup_action", kwargs=dict(pk=shift.pk))
80
81 def reverse_event_detail(self, event):
82 return event.get_absolute_url()
83
84
85 @dataclasses.dataclass(frozen=True)
86 class PlaceholderParticipant(AbstractParticipant):
87 def new_participation(self, shift):
88 return PlaceholderParticipation(
89 shift=shift, first_name=self.first_name, last_name=self.last_name
90 )
91
92 def participation_for(self, shift):
93 try:
94 return PlaceholderParticipation.objects.get(
95 shift=shift, first_name=self.first_name, last_name=self.last_name
96 )
97 except PlaceholderParticipation.DoesNotExist:
98 return None
99
100 def all_participations(self):
101 return AbstractParticipation.objects.none()
102
103 def reverse_signup_action(self, shift):
104 raise NotImplementedError
105
106 def reverse_event_detail(self, event):
107 raise NotImplementedError
108
109 @property
110 def icon(self):
111 return mark_safe(
112 f'<span class="fa fa-user-tag" data-toggle="tooltip" data-placement="left" title="{_("Placeholder")}"></span>'
113 )
114
[end of ephios/core/signup/participants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ephios/core/signup/participants.py b/ephios/core/signup/participants.py
--- a/ephios/core/signup/participants.py
+++ b/ephios/core/signup/participants.py
@@ -27,6 +27,12 @@
today, born = today or date.today(), self.date_of_birth
return today.year - born.year - ((today.month, today.day) < (born.month, born.day))
+ @property
+ def is_minor(self):
+ if age := self.get_age():
+ return age < 18
+ return False
+
def __str__(self):
return f"{self.first_name} {self.last_name}"
| {"golden_diff": "diff --git a/ephios/core/signup/participants.py b/ephios/core/signup/participants.py\n--- a/ephios/core/signup/participants.py\n+++ b/ephios/core/signup/participants.py\n@@ -27,6 +27,12 @@\n today, born = today or date.today(), self.date_of_birth\n return today.year - born.year - ((today.month, today.day) < (born.month, born.day))\n \n+ @property\n+ def is_minor(self):\n+ if age := self.get_age():\n+ return age < 18\n+ return False\n+\n def __str__(self):\n return f\"{self.first_name} {self.last_name}\"\n", "issue": "Make minors identifiable on event detail page\nAs an Einsatzleiter, I want to quickly grasp which participants are younger than 18 years. For that purpose, I want to have the participation boxes on the event detail page/shift box to display a small warning/indication, e.g. a red corner or similar.\n", "before_files": [{"content": "import dataclasses\nimport functools\nfrom datetime import date\nfrom typing import Optional\n\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import QuerySet\nfrom django.urls import reverse\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\nfrom ephios.core.models import AbstractParticipation, LocalParticipation, Qualification\nfrom ephios.core.models.events import PlaceholderParticipation\n\n\[email protected](frozen=True)\nclass AbstractParticipant:\n first_name: str\n last_name: str\n qualifications: QuerySet = dataclasses.field(hash=False)\n date_of_birth: Optional[date]\n email: Optional[str] # if set to None, no notifications are sent\n\n def get_age(self, today: date = None):\n if self.date_of_birth is None:\n return None\n today, born = today or date.today(), self.date_of_birth\n return today.year - born.year - ((today.month, today.day) < (born.month, born.day))\n\n def __str__(self):\n return f\"{self.first_name} {self.last_name}\"\n\n def new_participation(self, shift):\n raise NotImplementedError\n\n def participation_for(self, shift):\n \"\"\"Return the participation object for a shift. Return None if it does not exist.\"\"\"\n raise NotImplementedError\n\n def all_participations(self):\n \"\"\"Return all participations for this participant\"\"\"\n raise NotImplementedError\n\n @functools.lru_cache(maxsize=64)\n def collect_all_qualifications(self) -> set:\n return Qualification.collect_all_included_qualifications(self.qualifications)\n\n def has_qualifications(self, qualifications):\n return set(qualifications) <= self.collect_all_qualifications()\n\n def reverse_signup_action(self, shift):\n raise NotImplementedError\n\n def reverse_event_detail(self, event):\n raise NotImplementedError\n\n @property\n def icon(self):\n return mark_safe('<span class=\"fa fa-user\"></span>')\n\n\[email protected](frozen=True)\nclass LocalUserParticipant(AbstractParticipant):\n user: get_user_model()\n\n def new_participation(self, shift):\n return LocalParticipation(shift=shift, user=self.user)\n\n def participation_for(self, shift):\n try:\n return LocalParticipation.objects.get(shift=shift, user=self.user)\n except LocalParticipation.DoesNotExist:\n return None\n\n def all_participations(self):\n return LocalParticipation.objects.filter(user=self.user)\n\n def reverse_signup_action(self, shift):\n return reverse(\"core:signup_action\", kwargs=dict(pk=shift.pk))\n\n def reverse_event_detail(self, event):\n return event.get_absolute_url()\n\n\[email protected](frozen=True)\nclass PlaceholderParticipant(AbstractParticipant):\n def new_participation(self, shift):\n return PlaceholderParticipation(\n shift=shift, first_name=self.first_name, last_name=self.last_name\n )\n\n def participation_for(self, shift):\n try:\n return PlaceholderParticipation.objects.get(\n shift=shift, first_name=self.first_name, last_name=self.last_name\n )\n except PlaceholderParticipation.DoesNotExist:\n return None\n\n def all_participations(self):\n return AbstractParticipation.objects.none()\n\n def reverse_signup_action(self, shift):\n raise NotImplementedError\n\n def reverse_event_detail(self, event):\n raise NotImplementedError\n\n @property\n def icon(self):\n return mark_safe(\n f'<span class=\"fa fa-user-tag\" data-toggle=\"tooltip\" data-placement=\"left\" title=\"{_(\"Placeholder\")}\"></span>'\n )\n", "path": "ephios/core/signup/participants.py"}]} | 1,607 | 152 |
gh_patches_debug_37692 | rasdani/github-patches | git_diff | astronomer__astro-sdk-325 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Astro Build's Integration Test breaking on 0.8.1
broken on - 0.8.1 , but works with 0.7.0.
In this test dag, task_5 joins tables from task_3 (snowflake) and task_4 (postgres). The task’s print statement shows the joined table, suggesting successful ingestion and transformation. However, the error below suggests the the postgres output conn is expected to contain the database element of the snowflake connection.
In other words, the task fails becausepostgres_conn doesn’t have the database attribute associated with snowflake_conn.
```
import time
from datetime import datetime
import pandas as pd
from airflow.decorators import dag, task
from airflow.models import Variable
from airflow.utils import timezone
from airflow.utils.dates import days_ago
from astro import dataframe as df
from astro import sql as aql
from astro.sql.table import Table, TempTable
@df()
def task_1_func():
return pd.DataFrame({'a':[1,2,3]})
@aql.transform(conn_id='postgres_conn')
def task_2_func(execution_date: Table):
return """SELECT * FROM actor WHERE startdate < '{{ execution_date }}'"""
@aql.transform(conn_id='snowflake_conn')
def task_3_func():
return """SELECT * FROM "ASTROBUILD"."BUILDSCHEMA"."MYTABLE" LIMIT 10;"""
@aql.transform(conn_id='postgres_conn')
def task_4_func():
return """SELECT * FROM actor LIMIT 10;"""
@df(conn_id='postgres_conn')
def task_5_func(task_3: pd.DataFrame, task_4: pd.DataFrame):
df=task_3.join(task_4)
print(df)
return df
@dag(schedule_interval='0 0 * * *', start_date=datetime(2022, 4, 15, 11, 28, 8), catchup=False, tags=['tag_1', 'tag_1'])
def dag_1():
task_1 = task_1_func()
task_2 = task_2_func(output_table=Table(conn_id='postgres_conn', schema='tmp_astro', table_name='tmp_astro_dag_1_task_2'), execution_date=Table(conn_id='postgres_conn', table_name='execution_date'))
task_3 = task_3_func(output_table=Table(conn_id='snowflake_conn', schema='tmp_astro', table_name='tmp_astro_dag_1_task_3'))
task_4 = task_4_func(output_table=Table(conn_id='postgres_conn', schema='tmp_astro', table_name='tmp_astro_dag_1_task_4'))
task_5 = task_5_func(task_3, task_4, output_table=Table(conn_id='postgres_conn', schema='tmp_astro', table_name='tmp_astro_dag_1_task_5'))
dag_obj = dag_1()
```
Error:
```
INFO - Using connection to: id: postgres_conn. Host: 127.0.0.1, Port: 8999, Schema: postgres, Login: postgres, Password: ***, extra: {}
*** psycopg2.OperationalError: connection to server at "127.0.0.1", port 8999 failed: FATAL: database "ASTROBUILD" does not exist
```
</issue>
<code>
[start of src/astro/utils/table_handler.py]
1 import inspect
2 from typing import Optional
3
4 import pandas
5
6 from astro.settings import SCHEMA
7 from astro.sql.table import Table
8
9
10 class TableHandler:
11 def _set_variables_from_first_table(self):
12 """
13 When we create our SQL operation, we run with the assumption that the first table given is the "main table".
14 This means that a user doesn't need to define default conn_id, database, etc. in the function unless they want
15 to create default values.
16 """
17 first_table: Optional[Table] = None
18 if self.op_args:
19 table_index = [x for x, t in enumerate(self.op_args) if type(t) == Table]
20 if table_index:
21 first_table = self.op_args[table_index[0]]
22 elif not first_table:
23 table_kwargs = [
24 x
25 for x in inspect.signature(self.python_callable).parameters.values()
26 if (
27 x.annotation == Table
28 and type(self.op_kwargs[x.name]) == Table
29 or x.annotation == pandas.DataFrame
30 and type(self.op_kwargs[x.name]) == Table
31 )
32 ]
33 if table_kwargs:
34 first_table = self.op_kwargs[table_kwargs[0].name]
35
36 # If there is no first table via op_ags or kwargs, we check the parameters
37 elif not first_table:
38 if self.parameters:
39 param_tables = [t for t in self.parameters.values() if type(t) == Table]
40 if param_tables:
41 first_table = param_tables[0]
42
43 if first_table:
44 self.conn_id = first_table.conn_id or self.conn_id
45 self.database = first_table.database or self.database
46 self.schema = first_table.schema or self.schema
47 self.warehouse = first_table.warehouse or self.warehouse
48 self.role = first_table.role or self.role
49
50 def populate_output_table(self):
51 self.output_table.conn_id = self.output_table.conn_id or self.conn_id
52 self.output_table.database = self.output_table.database or self.database
53 self.output_table.warehouse = self.output_table.warehouse or self.warehouse
54 self.output_table.schema = self.output_table.schema or SCHEMA
55
[end of src/astro/utils/table_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/astro/utils/table_handler.py b/src/astro/utils/table_handler.py
--- a/src/astro/utils/table_handler.py
+++ b/src/astro/utils/table_handler.py
@@ -16,28 +16,51 @@
"""
first_table: Optional[Table] = None
if self.op_args:
- table_index = [x for x, t in enumerate(self.op_args) if type(t) == Table]
- if table_index:
+ table_index = [
+ x for x, t in enumerate(self.op_args) if isinstance(t, Table)
+ ]
+ conn_id_set = {x.conn_id for x in self.op_args if isinstance(x, Table)}
+ # Check to see if all tables belong to same conn_id. Otherwise, we this can go wrong for cases
+ # 1. When we have tables from different DBs.
+ # 2. When we have tables from different conn_id, since they can be configured with different
+ # database/schema etc.
+ if table_index and len(conn_id_set) == 1:
first_table = self.op_args[table_index[0]]
- elif not first_table:
+
+ if not first_table and self.op_kwargs and self.python_callable:
table_kwargs = [
x
for x in inspect.signature(self.python_callable).parameters.values()
if (
x.annotation == Table
- and type(self.op_kwargs[x.name]) == Table
+ and isinstance(self.op_kwargs[x.name], Table)
or x.annotation == pandas.DataFrame
- and type(self.op_kwargs[x.name]) == Table
+ and isinstance(self.op_kwargs[x.name], Table)
)
]
- if table_kwargs:
+ conn_id_set = {
+ self.op_kwargs[x.name].conn_id
+ for x in inspect.signature(self.python_callable).parameters.values()
+ if (
+ x.annotation == Table
+ and isinstance(self.op_kwargs[x.name], Table)
+ or x.annotation == pandas.DataFrame
+ and isinstance(self.op_kwargs[x.name], Table)
+ )
+ }
+ if table_kwargs and len(conn_id_set) == 1:
first_table = self.op_kwargs[table_kwargs[0].name]
# If there is no first table via op_ags or kwargs, we check the parameters
- elif not first_table:
+ if not first_table and self.parameters:
if self.parameters:
- param_tables = [t for t in self.parameters.values() if type(t) == Table]
- if param_tables:
+ param_tables = [
+ t for t in self.parameters.values() if isinstance(t, Table)
+ ]
+ conn_id_set = {
+ t.conn_id for t in self.parameters.values() if isinstance(t, Table)
+ }
+ if param_tables and len(conn_id_set) == 1:
first_table = param_tables[0]
if first_table:
| {"golden_diff": "diff --git a/src/astro/utils/table_handler.py b/src/astro/utils/table_handler.py\n--- a/src/astro/utils/table_handler.py\n+++ b/src/astro/utils/table_handler.py\n@@ -16,28 +16,51 @@\n \"\"\"\n first_table: Optional[Table] = None\n if self.op_args:\n- table_index = [x for x, t in enumerate(self.op_args) if type(t) == Table]\n- if table_index:\n+ table_index = [\n+ x for x, t in enumerate(self.op_args) if isinstance(t, Table)\n+ ]\n+ conn_id_set = {x.conn_id for x in self.op_args if isinstance(x, Table)}\n+ # Check to see if all tables belong to same conn_id. Otherwise, we this can go wrong for cases\n+ # 1. When we have tables from different DBs.\n+ # 2. When we have tables from different conn_id, since they can be configured with different\n+ # database/schema etc.\n+ if table_index and len(conn_id_set) == 1:\n first_table = self.op_args[table_index[0]]\n- elif not first_table:\n+\n+ if not first_table and self.op_kwargs and self.python_callable:\n table_kwargs = [\n x\n for x in inspect.signature(self.python_callable).parameters.values()\n if (\n x.annotation == Table\n- and type(self.op_kwargs[x.name]) == Table\n+ and isinstance(self.op_kwargs[x.name], Table)\n or x.annotation == pandas.DataFrame\n- and type(self.op_kwargs[x.name]) == Table\n+ and isinstance(self.op_kwargs[x.name], Table)\n )\n ]\n- if table_kwargs:\n+ conn_id_set = {\n+ self.op_kwargs[x.name].conn_id\n+ for x in inspect.signature(self.python_callable).parameters.values()\n+ if (\n+ x.annotation == Table\n+ and isinstance(self.op_kwargs[x.name], Table)\n+ or x.annotation == pandas.DataFrame\n+ and isinstance(self.op_kwargs[x.name], Table)\n+ )\n+ }\n+ if table_kwargs and len(conn_id_set) == 1:\n first_table = self.op_kwargs[table_kwargs[0].name]\n \n # If there is no first table via op_ags or kwargs, we check the parameters\n- elif not first_table:\n+ if not first_table and self.parameters:\n if self.parameters:\n- param_tables = [t for t in self.parameters.values() if type(t) == Table]\n- if param_tables:\n+ param_tables = [\n+ t for t in self.parameters.values() if isinstance(t, Table)\n+ ]\n+ conn_id_set = {\n+ t.conn_id for t in self.parameters.values() if isinstance(t, Table)\n+ }\n+ if param_tables and len(conn_id_set) == 1:\n first_table = param_tables[0]\n \n if first_table:\n", "issue": "Astro Build's Integration Test breaking on 0.8.1\nbroken on - 0.8.1 , but works with 0.7.0.\n\nIn this test dag, task_5 joins tables from task_3 (snowflake) and task_4 (postgres). The task\u2019s print statement shows the joined table, suggesting successful ingestion and transformation. However, the error below suggests the the postgres output conn is expected to contain the database element of the snowflake connection.\nIn other words, the task fails becausepostgres_conn doesn\u2019t have the database attribute associated with snowflake_conn.\n\n```\nimport time\nfrom datetime import datetime\n\nimport pandas as pd\nfrom airflow.decorators import dag, task\nfrom airflow.models import Variable\nfrom airflow.utils import timezone\nfrom airflow.utils.dates import days_ago\nfrom astro import dataframe as df\nfrom astro import sql as aql\nfrom astro.sql.table import Table, TempTable\n\n@df()\ndef task_1_func():\n return pd.DataFrame({'a':[1,2,3]})\n\[email protected](conn_id='postgres_conn')\ndef task_2_func(execution_date: Table):\n return \"\"\"SELECT * FROM actor WHERE startdate < '{{ execution_date }}'\"\"\"\n\[email protected](conn_id='snowflake_conn')\ndef task_3_func():\n return \"\"\"SELECT * FROM \"ASTROBUILD\".\"BUILDSCHEMA\".\"MYTABLE\" LIMIT 10;\"\"\"\n\[email protected](conn_id='postgres_conn')\ndef task_4_func():\n return \"\"\"SELECT * FROM actor LIMIT 10;\"\"\"\n\n@df(conn_id='postgres_conn')\ndef task_5_func(task_3: pd.DataFrame, task_4: pd.DataFrame):\n df=task_3.join(task_4)\n print(df)\n return df\n\n@dag(schedule_interval='0 0 * * *', start_date=datetime(2022, 4, 15, 11, 28, 8), catchup=False, tags=['tag_1', 'tag_1'])\ndef dag_1():\n task_1 = task_1_func()\n task_2 = task_2_func(output_table=Table(conn_id='postgres_conn', schema='tmp_astro', table_name='tmp_astro_dag_1_task_2'), execution_date=Table(conn_id='postgres_conn', table_name='execution_date'))\n task_3 = task_3_func(output_table=Table(conn_id='snowflake_conn', schema='tmp_astro', table_name='tmp_astro_dag_1_task_3'))\n task_4 = task_4_func(output_table=Table(conn_id='postgres_conn', schema='tmp_astro', table_name='tmp_astro_dag_1_task_4'))\n task_5 = task_5_func(task_3, task_4, output_table=Table(conn_id='postgres_conn', schema='tmp_astro', table_name='tmp_astro_dag_1_task_5'))\n\ndag_obj = dag_1()\n```\n\nError:\n\n```\nINFO - Using connection to: id: postgres_conn. Host: 127.0.0.1, Port: 8999, Schema: postgres, Login: postgres, Password: ***, extra: {}\n*** psycopg2.OperationalError: connection to server at \"127.0.0.1\", port 8999 failed: FATAL: database \"ASTROBUILD\" does not exist\n```\n", "before_files": [{"content": "import inspect\nfrom typing import Optional\n\nimport pandas\n\nfrom astro.settings import SCHEMA\nfrom astro.sql.table import Table\n\n\nclass TableHandler:\n def _set_variables_from_first_table(self):\n \"\"\"\n When we create our SQL operation, we run with the assumption that the first table given is the \"main table\".\n This means that a user doesn't need to define default conn_id, database, etc. in the function unless they want\n to create default values.\n \"\"\"\n first_table: Optional[Table] = None\n if self.op_args:\n table_index = [x for x, t in enumerate(self.op_args) if type(t) == Table]\n if table_index:\n first_table = self.op_args[table_index[0]]\n elif not first_table:\n table_kwargs = [\n x\n for x in inspect.signature(self.python_callable).parameters.values()\n if (\n x.annotation == Table\n and type(self.op_kwargs[x.name]) == Table\n or x.annotation == pandas.DataFrame\n and type(self.op_kwargs[x.name]) == Table\n )\n ]\n if table_kwargs:\n first_table = self.op_kwargs[table_kwargs[0].name]\n\n # If there is no first table via op_ags or kwargs, we check the parameters\n elif not first_table:\n if self.parameters:\n param_tables = [t for t in self.parameters.values() if type(t) == Table]\n if param_tables:\n first_table = param_tables[0]\n\n if first_table:\n self.conn_id = first_table.conn_id or self.conn_id\n self.database = first_table.database or self.database\n self.schema = first_table.schema or self.schema\n self.warehouse = first_table.warehouse or self.warehouse\n self.role = first_table.role or self.role\n\n def populate_output_table(self):\n self.output_table.conn_id = self.output_table.conn_id or self.conn_id\n self.output_table.database = self.output_table.database or self.database\n self.output_table.warehouse = self.output_table.warehouse or self.warehouse\n self.output_table.schema = self.output_table.schema or SCHEMA\n", "path": "src/astro/utils/table_handler.py"}]} | 1,817 | 635 |
gh_patches_debug_14014 | rasdani/github-patches | git_diff | sanic-org__sanic-1857 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nightly build fails due to websockets version not matching setup.py
on setup.py: >=0.7.0,<0.9
on tox.ini: >=0.7.0,<0.8
</issue>
<code>
[start of sanic/websocket.py]
1 from typing import (
2 Any,
3 Awaitable,
4 Callable,
5 Dict,
6 MutableMapping,
7 Optional,
8 Union,
9 )
10
11 from httptools import HttpParserUpgrade # type: ignore
12 from websockets import ( # type: ignore
13 ConnectionClosed,
14 InvalidHandshake,
15 WebSocketCommonProtocol,
16 handshake,
17 )
18
19 from sanic.exceptions import InvalidUsage
20 from sanic.server import HttpProtocol
21
22
23 __all__ = ["ConnectionClosed", "WebSocketProtocol", "WebSocketConnection"]
24
25 ASIMessage = MutableMapping[str, Any]
26
27
28 class WebSocketProtocol(HttpProtocol):
29 def __init__(
30 self,
31 *args,
32 websocket_timeout=10,
33 websocket_max_size=None,
34 websocket_max_queue=None,
35 websocket_read_limit=2 ** 16,
36 websocket_write_limit=2 ** 16,
37 **kwargs
38 ):
39 super().__init__(*args, **kwargs)
40 self.websocket = None
41 # self.app = None
42 self.websocket_timeout = websocket_timeout
43 self.websocket_max_size = websocket_max_size
44 self.websocket_max_queue = websocket_max_queue
45 self.websocket_read_limit = websocket_read_limit
46 self.websocket_write_limit = websocket_write_limit
47
48 # timeouts make no sense for websocket routes
49 def request_timeout_callback(self):
50 if self.websocket is None:
51 super().request_timeout_callback()
52
53 def response_timeout_callback(self):
54 if self.websocket is None:
55 super().response_timeout_callback()
56
57 def keep_alive_timeout_callback(self):
58 if self.websocket is None:
59 super().keep_alive_timeout_callback()
60
61 def connection_lost(self, exc):
62 if self.websocket is not None:
63 self.websocket.connection_lost(exc)
64 super().connection_lost(exc)
65
66 def data_received(self, data):
67 if self.websocket is not None:
68 # pass the data to the websocket protocol
69 self.websocket.data_received(data)
70 else:
71 try:
72 super().data_received(data)
73 except HttpParserUpgrade:
74 # this is okay, it just indicates we've got an upgrade request
75 pass
76
77 def write_response(self, response):
78 if self.websocket is not None:
79 # websocket requests do not write a response
80 self.transport.close()
81 else:
82 super().write_response(response)
83
84 async def websocket_handshake(self, request, subprotocols=None):
85 # let the websockets package do the handshake with the client
86 headers = {}
87
88 try:
89 key = handshake.check_request(request.headers)
90 handshake.build_response(headers, key)
91 except InvalidHandshake:
92 raise InvalidUsage("Invalid websocket request")
93
94 subprotocol = None
95 if subprotocols and "Sec-Websocket-Protocol" in request.headers:
96 # select a subprotocol
97 client_subprotocols = [
98 p.strip()
99 for p in request.headers["Sec-Websocket-Protocol"].split(",")
100 ]
101 for p in client_subprotocols:
102 if p in subprotocols:
103 subprotocol = p
104 headers["Sec-Websocket-Protocol"] = subprotocol
105 break
106
107 # write the 101 response back to the client
108 rv = b"HTTP/1.1 101 Switching Protocols\r\n"
109 for k, v in headers.items():
110 rv += k.encode("utf-8") + b": " + v.encode("utf-8") + b"\r\n"
111 rv += b"\r\n"
112 request.transport.write(rv)
113
114 # hook up the websocket protocol
115 self.websocket = WebSocketCommonProtocol(
116 timeout=self.websocket_timeout,
117 max_size=self.websocket_max_size,
118 max_queue=self.websocket_max_queue,
119 read_limit=self.websocket_read_limit,
120 write_limit=self.websocket_write_limit,
121 )
122 # Following two lines are required for websockets 8.x
123 self.websocket.is_client = False
124 self.websocket.side = "server"
125 self.websocket.subprotocol = subprotocol
126 self.websocket.connection_made(request.transport)
127 self.websocket.connection_open()
128 return self.websocket
129
130
131 class WebSocketConnection:
132
133 # TODO
134 # - Implement ping/pong
135
136 def __init__(
137 self,
138 send: Callable[[ASIMessage], Awaitable[None]],
139 receive: Callable[[], Awaitable[ASIMessage]],
140 ) -> None:
141 self._send = send
142 self._receive = receive
143
144 async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:
145 message: Dict[str, Union[str, bytes]] = {"type": "websocket.send"}
146
147 if isinstance(data, bytes):
148 message.update({"bytes": data})
149 else:
150 message.update({"text": str(data)})
151
152 await self._send(message)
153
154 async def recv(self, *args, **kwargs) -> Optional[str]:
155 message = await self._receive()
156
157 if message["type"] == "websocket.receive":
158 return message["text"]
159 elif message["type"] == "websocket.disconnect":
160 pass
161
162 return None
163
164 receive = recv
165
166 async def accept(self) -> None:
167 await self._send({"type": "websocket.accept", "subprotocol": ""})
168
169 async def close(self) -> None:
170 pass
171
[end of sanic/websocket.py]
[start of setup.py]
1 """
2 Sanic
3 """
4 import codecs
5 import os
6 import re
7 import sys
8 from distutils.util import strtobool
9
10 from setuptools import setup
11 from setuptools.command.test import test as TestCommand
12
13
14 class PyTest(TestCommand):
15 """
16 Provide a Test runner to be used from setup.py to run unit tests
17 """
18
19 user_options = [("pytest-args=", "a", "Arguments to pass to pytest")]
20
21 def initialize_options(self):
22 TestCommand.initialize_options(self)
23 self.pytest_args = ""
24
25 def run_tests(self):
26 import shlex
27 import pytest
28
29 errno = pytest.main(shlex.split(self.pytest_args))
30 sys.exit(errno)
31
32
33 def open_local(paths, mode="r", encoding="utf8"):
34 path = os.path.join(os.path.abspath(os.path.dirname(__file__)), *paths)
35
36 return codecs.open(path, mode, encoding)
37
38
39 with open_local(["sanic", "__version__.py"], encoding="latin1") as fp:
40 try:
41 version = re.findall(r"^__version__ = \"([^']+)\"\r?$", fp.read(), re.M)[0]
42 except IndexError:
43 raise RuntimeError("Unable to determine version.")
44
45 with open_local(["README.rst"]) as rm:
46 long_description = rm.read()
47
48 setup_kwargs = {
49 "name": "sanic",
50 "version": version,
51 "url": "http://github.com/huge-success/sanic/",
52 "license": "MIT",
53 "author": "Sanic Community",
54 "author_email": "[email protected]",
55 "description": (
56 "A web server and web framework that's written to go fast. Build fast. Run fast."
57 ),
58 "long_description": long_description,
59 "packages": ["sanic"],
60 "platforms": "any",
61 "python_requires": ">=3.6",
62 "classifiers": [
63 "Development Status :: 4 - Beta",
64 "Environment :: Web Environment",
65 "License :: OSI Approved :: MIT License",
66 "Programming Language :: Python :: 3.6",
67 "Programming Language :: Python :: 3.7",
68 "Programming Language :: Python :: 3.8",
69 ],
70 "entry_points": {"console_scripts": ["sanic = sanic.__main__:main"]},
71 }
72
73 env_dependency = '; sys_platform != "win32" ' 'and implementation_name == "cpython"'
74 ujson = "ujson>=1.35" + env_dependency
75 uvloop = "uvloop>=0.5.3" + env_dependency
76
77 requirements = [
78 "httptools>=0.0.10",
79 uvloop,
80 ujson,
81 "aiofiles>=0.3.0",
82 "websockets>=7.0,<9.0",
83 "multidict>=4.0,<5.0",
84 "httpx==0.11.1",
85 ]
86
87 tests_require = [
88 "pytest==5.2.1",
89 "multidict>=4.0,<5.0",
90 "gunicorn",
91 "pytest-cov",
92 "httpcore==0.3.0",
93 "beautifulsoup4",
94 uvloop,
95 ujson,
96 "pytest-sanic",
97 "pytest-sugar",
98 "pytest-benchmark",
99 ]
100
101 docs_require = [
102 "sphinx>=2.1.2",
103 "sphinx_rtd_theme",
104 "recommonmark>=0.5.0",
105 "docutils",
106 "pygments",
107 ]
108
109 dev_require = tests_require + [
110 "aiofiles",
111 "tox",
112 "black",
113 "flake8",
114 "bandit",
115 "towncrier",
116 ]
117
118 all_require = dev_require + docs_require
119
120 if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
121 print("Installing without uJSON")
122 requirements.remove(ujson)
123 tests_require.remove(ujson)
124
125 # 'nt' means windows OS
126 if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")):
127 print("Installing without uvLoop")
128 requirements.remove(uvloop)
129 tests_require.remove(uvloop)
130
131 extras_require = {
132 "test": tests_require,
133 "dev": dev_require,
134 "docs": docs_require,
135 "all": all_require,
136 }
137
138 setup_kwargs["install_requires"] = requirements
139 setup_kwargs["tests_require"] = tests_require
140 setup_kwargs["extras_require"] = extras_require
141 setup_kwargs["cmdclass"] = {"test": PyTest}
142 setup(**setup_kwargs)
143
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sanic/websocket.py b/sanic/websocket.py
--- a/sanic/websocket.py
+++ b/sanic/websocket.py
@@ -113,7 +113,7 @@
# hook up the websocket protocol
self.websocket = WebSocketCommonProtocol(
- timeout=self.websocket_timeout,
+ close_timeout=self.websocket_timeout,
max_size=self.websocket_max_size,
max_queue=self.websocket_max_queue,
read_limit=self.websocket_read_limit,
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -79,7 +79,7 @@
uvloop,
ujson,
"aiofiles>=0.3.0",
- "websockets>=7.0,<9.0",
+ "websockets>=8.1,<9.0",
"multidict>=4.0,<5.0",
"httpx==0.11.1",
]
| {"golden_diff": "diff --git a/sanic/websocket.py b/sanic/websocket.py\n--- a/sanic/websocket.py\n+++ b/sanic/websocket.py\n@@ -113,7 +113,7 @@\n \n # hook up the websocket protocol\n self.websocket = WebSocketCommonProtocol(\n- timeout=self.websocket_timeout,\n+ close_timeout=self.websocket_timeout,\n max_size=self.websocket_max_size,\n max_queue=self.websocket_max_queue,\n read_limit=self.websocket_read_limit,\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -79,7 +79,7 @@\n uvloop,\n ujson,\n \"aiofiles>=0.3.0\",\n- \"websockets>=7.0,<9.0\",\n+ \"websockets>=8.1,<9.0\",\n \"multidict>=4.0,<5.0\",\n \"httpx==0.11.1\",\n ]\n", "issue": "Nightly build fails due to websockets version not matching setup.py\non setup.py: >=0.7.0,<0.9\r\non tox.ini: >=0.7.0,<0.8\n", "before_files": [{"content": "from typing import (\n Any,\n Awaitable,\n Callable,\n Dict,\n MutableMapping,\n Optional,\n Union,\n)\n\nfrom httptools import HttpParserUpgrade # type: ignore\nfrom websockets import ( # type: ignore\n ConnectionClosed,\n InvalidHandshake,\n WebSocketCommonProtocol,\n handshake,\n)\n\nfrom sanic.exceptions import InvalidUsage\nfrom sanic.server import HttpProtocol\n\n\n__all__ = [\"ConnectionClosed\", \"WebSocketProtocol\", \"WebSocketConnection\"]\n\nASIMessage = MutableMapping[str, Any]\n\n\nclass WebSocketProtocol(HttpProtocol):\n def __init__(\n self,\n *args,\n websocket_timeout=10,\n websocket_max_size=None,\n websocket_max_queue=None,\n websocket_read_limit=2 ** 16,\n websocket_write_limit=2 ** 16,\n **kwargs\n ):\n super().__init__(*args, **kwargs)\n self.websocket = None\n # self.app = None\n self.websocket_timeout = websocket_timeout\n self.websocket_max_size = websocket_max_size\n self.websocket_max_queue = websocket_max_queue\n self.websocket_read_limit = websocket_read_limit\n self.websocket_write_limit = websocket_write_limit\n\n # timeouts make no sense for websocket routes\n def request_timeout_callback(self):\n if self.websocket is None:\n super().request_timeout_callback()\n\n def response_timeout_callback(self):\n if self.websocket is None:\n super().response_timeout_callback()\n\n def keep_alive_timeout_callback(self):\n if self.websocket is None:\n super().keep_alive_timeout_callback()\n\n def connection_lost(self, exc):\n if self.websocket is not None:\n self.websocket.connection_lost(exc)\n super().connection_lost(exc)\n\n def data_received(self, data):\n if self.websocket is not None:\n # pass the data to the websocket protocol\n self.websocket.data_received(data)\n else:\n try:\n super().data_received(data)\n except HttpParserUpgrade:\n # this is okay, it just indicates we've got an upgrade request\n pass\n\n def write_response(self, response):\n if self.websocket is not None:\n # websocket requests do not write a response\n self.transport.close()\n else:\n super().write_response(response)\n\n async def websocket_handshake(self, request, subprotocols=None):\n # let the websockets package do the handshake with the client\n headers = {}\n\n try:\n key = handshake.check_request(request.headers)\n handshake.build_response(headers, key)\n except InvalidHandshake:\n raise InvalidUsage(\"Invalid websocket request\")\n\n subprotocol = None\n if subprotocols and \"Sec-Websocket-Protocol\" in request.headers:\n # select a subprotocol\n client_subprotocols = [\n p.strip()\n for p in request.headers[\"Sec-Websocket-Protocol\"].split(\",\")\n ]\n for p in client_subprotocols:\n if p in subprotocols:\n subprotocol = p\n headers[\"Sec-Websocket-Protocol\"] = subprotocol\n break\n\n # write the 101 response back to the client\n rv = b\"HTTP/1.1 101 Switching Protocols\\r\\n\"\n for k, v in headers.items():\n rv += k.encode(\"utf-8\") + b\": \" + v.encode(\"utf-8\") + b\"\\r\\n\"\n rv += b\"\\r\\n\"\n request.transport.write(rv)\n\n # hook up the websocket protocol\n self.websocket = WebSocketCommonProtocol(\n timeout=self.websocket_timeout,\n max_size=self.websocket_max_size,\n max_queue=self.websocket_max_queue,\n read_limit=self.websocket_read_limit,\n write_limit=self.websocket_write_limit,\n )\n # Following two lines are required for websockets 8.x\n self.websocket.is_client = False\n self.websocket.side = \"server\"\n self.websocket.subprotocol = subprotocol\n self.websocket.connection_made(request.transport)\n self.websocket.connection_open()\n return self.websocket\n\n\nclass WebSocketConnection:\n\n # TODO\n # - Implement ping/pong\n\n def __init__(\n self,\n send: Callable[[ASIMessage], Awaitable[None]],\n receive: Callable[[], Awaitable[ASIMessage]],\n ) -> None:\n self._send = send\n self._receive = receive\n\n async def send(self, data: Union[str, bytes], *args, **kwargs) -> None:\n message: Dict[str, Union[str, bytes]] = {\"type\": \"websocket.send\"}\n\n if isinstance(data, bytes):\n message.update({\"bytes\": data})\n else:\n message.update({\"text\": str(data)})\n\n await self._send(message)\n\n async def recv(self, *args, **kwargs) -> Optional[str]:\n message = await self._receive()\n\n if message[\"type\"] == \"websocket.receive\":\n return message[\"text\"]\n elif message[\"type\"] == \"websocket.disconnect\":\n pass\n\n return None\n\n receive = recv\n\n async def accept(self) -> None:\n await self._send({\"type\": \"websocket.accept\", \"subprotocol\": \"\"})\n\n async def close(self) -> None:\n pass\n", "path": "sanic/websocket.py"}, {"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nimport sys\nfrom distutils.util import strtobool\n\nfrom setuptools import setup\nfrom setuptools.command.test import test as TestCommand\n\n\nclass PyTest(TestCommand):\n \"\"\"\n Provide a Test runner to be used from setup.py to run unit tests\n \"\"\"\n\n user_options = [(\"pytest-args=\", \"a\", \"Arguments to pass to pytest\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.pytest_args = \"\"\n\n def run_tests(self):\n import shlex\n import pytest\n\n errno = pytest.main(shlex.split(self.pytest_args))\n sys.exit(errno)\n\n\ndef open_local(paths, mode=\"r\", encoding=\"utf8\"):\n path = os.path.join(os.path.abspath(os.path.dirname(__file__)), *paths)\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local([\"sanic\", \"__version__.py\"], encoding=\"latin1\") as fp:\n try:\n version = re.findall(r\"^__version__ = \\\"([^']+)\\\"\\r?$\", fp.read(), re.M)[0]\n except IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\nwith open_local([\"README.rst\"]) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n \"name\": \"sanic\",\n \"version\": version,\n \"url\": \"http://github.com/huge-success/sanic/\",\n \"license\": \"MIT\",\n \"author\": \"Sanic Community\",\n \"author_email\": \"[email protected]\",\n \"description\": (\n \"A web server and web framework that's written to go fast. Build fast. Run fast.\"\n ),\n \"long_description\": long_description,\n \"packages\": [\"sanic\"],\n \"platforms\": \"any\",\n \"python_requires\": \">=3.6\",\n \"classifiers\": [\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n \"entry_points\": {\"console_scripts\": [\"sanic = sanic.__main__:main\"]},\n}\n\nenv_dependency = '; sys_platform != \"win32\" ' 'and implementation_name == \"cpython\"'\nujson = \"ujson>=1.35\" + env_dependency\nuvloop = \"uvloop>=0.5.3\" + env_dependency\n\nrequirements = [\n \"httptools>=0.0.10\",\n uvloop,\n ujson,\n \"aiofiles>=0.3.0\",\n \"websockets>=7.0,<9.0\",\n \"multidict>=4.0,<5.0\",\n \"httpx==0.11.1\",\n]\n\ntests_require = [\n \"pytest==5.2.1\",\n \"multidict>=4.0,<5.0\",\n \"gunicorn\",\n \"pytest-cov\",\n \"httpcore==0.3.0\",\n \"beautifulsoup4\",\n uvloop,\n ujson,\n \"pytest-sanic\",\n \"pytest-sugar\",\n \"pytest-benchmark\",\n]\n\ndocs_require = [\n \"sphinx>=2.1.2\",\n \"sphinx_rtd_theme\",\n \"recommonmark>=0.5.0\",\n \"docutils\",\n \"pygments\",\n]\n\ndev_require = tests_require + [\n \"aiofiles\",\n \"tox\",\n \"black\",\n \"flake8\",\n \"bandit\",\n \"towncrier\",\n]\n\nall_require = dev_require + docs_require\n\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n tests_require.remove(ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n tests_require.remove(uvloop)\n\nextras_require = {\n \"test\": tests_require,\n \"dev\": dev_require,\n \"docs\": docs_require,\n \"all\": all_require,\n}\n\nsetup_kwargs[\"install_requires\"] = requirements\nsetup_kwargs[\"tests_require\"] = tests_require\nsetup_kwargs[\"extras_require\"] = extras_require\nsetup_kwargs[\"cmdclass\"] = {\"test\": PyTest}\nsetup(**setup_kwargs)\n", "path": "setup.py"}]} | 3,407 | 209 |
gh_patches_debug_29685 | rasdani/github-patches | git_diff | avocado-framework__avocado-4644 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Functional test for requirements resolver fails easily
The PASS or FAIL result for the functional tests for the requirement resolver is very much a hit/miss, but it's somewhat easy to reproduce with:
```
$ export CI=1
$ dnf remove hello
$ dnf clean all
$ avocado run --test-runner=nrunner selftests/functional/test_requirements.py
JOB ID : 35d4cf58034c04eb47be0276197f8ae5f17af82b
JOB LOG : /home/cleber/avocado/job-results/job-2021-05-26T23.55-35d4cf5/job.log
(3/4) selftests/functional/test_requirements.py:BasicTest.test_multiple_success: STARTED
(2/4) selftests/functional/test_requirements.py:BasicTest.test_single_fail: STARTED
(1/4) selftests/functional/test_requirements.py:BasicTest.test_single_success: STARTED
(4/4) selftests/functional/test_requirements.py:BasicTest.test_multiple_fails: STARTED
(2/4) selftests/functional/test_requirements.py:BasicTest.test_single_fail: PASS (4.75 s)
(1/4) selftests/functional/test_requirements.py:BasicTest.test_single_success: PASS (5.29 s)
(3/4) selftests/functional/test_requirements.py:BasicTest.test_multiple_success: FAIL (14.22 s)
(4/4) selftests/functional/test_requirements.py:BasicTest.test_multiple_fails: PASS (20.34 s)
RESULTS : PASS 3 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML : /home/cleber/avocado/job-results/job-2021-05-26T23.55-35d4cf5/results.html
JOB TIME : 21.52 s
```
The reason for the failure is that, multiple tests contain:
```
:avocado: requirement={"type": "package", "name": "hello"}
```
And because the nrunner will run those tests in parallel, it will also run those requirements in parallel. But, given that dnf will hold a systemwide lock, the others can fail. This is reproducible if you try to run multiple `avocado-runner-requirement-package` in parallel manually, such as this which succeeds:
```
{'status': 'running', 'time': 1345096.780925351}
{'status': 'running', 'time': 1345097.281267814}
{'type': 'stdout', 'log': b'Package(s) hello installed successfully', 'status': 'running', 'time': 1345097.281664319}
{'type': 'stderr', 'log': b'', 'status': 'running', 'time': 1345097.281689778}
{'result': 'pass', 'status': 'finished', 'time': 1345097.281706051}
```
And this one that ends up failing:
```
{'status': 'running', 'time': 1345097.18655851}
{'status': 'running', 'time': 1345097.687243183}
{'status': 'running', 'time': 1345098.187887105}
{'type': 'stdout', 'log': b'', 'status': 'running', 'time': 1345098.188174427}
{'type': 'stderr', 'log': b'Failed to install hello. Check the package(s) name and if sudo permission is granted.', 'status': 'running', 'time': 1345098.188197785}
{'result': 'error', 'status': 'finished', 'time': 1345098.188217684}
```
This issue became apparent because of the problems addressed in #4619 which causes test errors in nrunner to really be signaled as an exit code.
</issue>
<code>
[start of avocado/core/runners/requirement_package.py]
1 import time
2 from multiprocessing import Process, SimpleQueue
3
4 from ...utils.software_manager.main import MESSAGES
5 from ...utils.software_manager.manager import SoftwareManager
6 from .. import nrunner
7
8
9 class RequirementPackageRunner(nrunner.BaseRunner):
10 """Runner for requirements of type package
11
12 This runner handles, the installation, verification and removal of
13 packages using the avocado-software-manager.
14
15 Runnable attributes usage:
16
17 * kind: 'requirement-package'
18
19 * uri: not used
20
21 * args: not used
22
23 * kwargs:
24 - name: the package name (required)
25 - action: one of 'install', 'check', or 'remove' (optional, defaults
26 to 'install')
27 """
28
29 @staticmethod
30 def _check(software_manager, package):
31 if software_manager.check_installed(package):
32 result = 'pass'
33 stdout = MESSAGES['check-installed']['success'] % package
34 stderr = ''
35 else:
36 result = 'error'
37 stdout = ''
38 stderr = MESSAGES['check-installed']['fail'] % package
39 return result, stdout, stderr
40
41 @staticmethod
42 def _install(software_manager, cmd, package):
43 result = 'pass'
44 stderr = ''
45 if not software_manager.check_installed(package):
46 if software_manager.install(package):
47 stdout = MESSAGES[cmd]['success'] % package
48 else:
49 result = 'error'
50 stdout = ''
51 stderr = MESSAGES[cmd]['fail'] % package
52 else:
53 stdout = MESSAGES['check-installed']['success'] % package
54 return result, stdout, stderr
55
56 @staticmethod
57 def _remove(software_manager, cmd, package):
58 result = 'pass'
59 stderr = ''
60 if software_manager.check_installed(package):
61 if software_manager.remove(package):
62 stdout = MESSAGES[cmd]['success'] % package
63 else:
64 result = 'error'
65 stdout = ''
66 stderr = MESSAGES[cmd]['fail'] % package
67 else:
68 stdout = MESSAGES['check-installed']['fail'] % package
69 return result, stdout, stderr
70
71 def _run_software_manager(self, cmd, package, queue):
72 software_manager = SoftwareManager()
73
74 if not software_manager.is_capable():
75 output = {'result': 'error',
76 'stdout': '',
77 'stderr': ('Package manager not supported or not'
78 ' available.')}
79 queue.put(output)
80
81 if cmd == 'install':
82 result, stdout, stderr = self._install(software_manager, cmd,
83 package)
84
85 elif cmd == 'remove':
86 result, stdout, stderr = self._remove(software_manager, cmd,
87 package)
88
89 elif cmd == 'check':
90 result, stdout, stderr = self._check(software_manager, package)
91
92 output = {'result': result,
93 'stdout': stdout,
94 'stderr': stderr}
95 queue.put(output)
96
97 def run(self):
98 yield self.prepare_status('started')
99 # check if there is a valid 'action' argument
100 cmd = self.runnable.kwargs.get('action', 'install')
101 # avoid invalid arguments
102 if cmd not in ['install', 'check', 'remove']:
103 stderr = ("Invalid action %s. Use one of 'install', 'check' or"
104 " 'remove'" % cmd)
105 yield self.prepare_status('running',
106 {'type': 'stderr', 'log': stderr})
107 yield self.prepare_status('finished', {'result': 'error'})
108 return
109
110 package = self.runnable.kwargs.get('name')
111 # if package was passed correctly, run avocado-software-manager
112 if package is not None:
113 # let's spawn it to another process to be able to update the
114 # status messages and avoid the software-manager to lock this
115 # process
116 queue = SimpleQueue()
117 process = Process(target=self._run_software_manager,
118 args=(cmd, package, queue))
119 process.start()
120
121 while queue.empty():
122 time.sleep(nrunner.RUNNER_RUN_STATUS_INTERVAL)
123 yield self.prepare_status('running')
124
125 output = queue.get()
126 result = output['result']
127 stdout = output['stdout']
128 stderr = output['stderr']
129 else:
130 # Otherwise, log the missing package name
131 result = 'error'
132 stdout = ''
133 stderr = ('Package name should be passed as kwargs using'
134 ' name="package_name".')
135
136 yield self.prepare_status('running',
137 {'type': 'stdout',
138 'log': stdout.encode()})
139 yield self.prepare_status('running',
140 {'type': 'stderr',
141 'log': stderr.encode()})
142 yield self.prepare_status('finished', {'result': result})
143
144
145 class RunnerApp(nrunner.BaseRunnerApp):
146 PROG_NAME = 'avocado-runner-requirement-package'
147 PROG_DESCRIPTION = ('nrunner application for requirements of type package')
148 RUNNABLE_KINDS_CAPABLE = {'requirement-package': RequirementPackageRunner}
149
150
151 def main():
152 nrunner.main(RunnerApp)
153
154
155 if __name__ == '__main__':
156 main()
157
[end of avocado/core/runners/requirement_package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/avocado/core/runners/requirement_package.py b/avocado/core/runners/requirement_package.py
--- a/avocado/core/runners/requirement_package.py
+++ b/avocado/core/runners/requirement_package.py
@@ -46,9 +46,14 @@
if software_manager.install(package):
stdout = MESSAGES[cmd]['success'] % package
else:
- result = 'error'
- stdout = ''
- stderr = MESSAGES[cmd]['fail'] % package
+ # check if the error is a false negative because of package
+ # installation collision
+ if software_manager.check_installed(package):
+ stdout = MESSAGES[cmd]['success'] % package
+ else:
+ result = 'error'
+ stdout = ''
+ stderr = MESSAGES[cmd]['fail'] % package
else:
stdout = MESSAGES['check-installed']['success'] % package
return result, stdout, stderr
@@ -61,9 +66,14 @@
if software_manager.remove(package):
stdout = MESSAGES[cmd]['success'] % package
else:
- result = 'error'
- stdout = ''
- stderr = MESSAGES[cmd]['fail'] % package
+ # check if the error is a false negative because of package
+ # installation collision
+ if not software_manager.check_installed(package):
+ stdout = MESSAGES[cmd]['success'] % package
+ else:
+ result = 'error'
+ stdout = ''
+ stderr = MESSAGES[cmd]['fail'] % package
else:
stdout = MESSAGES['check-installed']['fail'] % package
return result, stdout, stderr
| {"golden_diff": "diff --git a/avocado/core/runners/requirement_package.py b/avocado/core/runners/requirement_package.py\n--- a/avocado/core/runners/requirement_package.py\n+++ b/avocado/core/runners/requirement_package.py\n@@ -46,9 +46,14 @@\n if software_manager.install(package):\n stdout = MESSAGES[cmd]['success'] % package\n else:\n- result = 'error'\n- stdout = ''\n- stderr = MESSAGES[cmd]['fail'] % package\n+ # check if the error is a false negative because of package\n+ # installation collision\n+ if software_manager.check_installed(package):\n+ stdout = MESSAGES[cmd]['success'] % package\n+ else:\n+ result = 'error'\n+ stdout = ''\n+ stderr = MESSAGES[cmd]['fail'] % package\n else:\n stdout = MESSAGES['check-installed']['success'] % package\n return result, stdout, stderr\n@@ -61,9 +66,14 @@\n if software_manager.remove(package):\n stdout = MESSAGES[cmd]['success'] % package\n else:\n- result = 'error'\n- stdout = ''\n- stderr = MESSAGES[cmd]['fail'] % package\n+ # check if the error is a false negative because of package\n+ # installation collision\n+ if not software_manager.check_installed(package):\n+ stdout = MESSAGES[cmd]['success'] % package\n+ else:\n+ result = 'error'\n+ stdout = ''\n+ stderr = MESSAGES[cmd]['fail'] % package\n else:\n stdout = MESSAGES['check-installed']['fail'] % package\n return result, stdout, stderr\n", "issue": "Functional test for requirements resolver fails easily\nThe PASS or FAIL result for the functional tests for the requirement resolver is very much a hit/miss, but it's somewhat easy to reproduce with:\r\n\r\n```\r\n$ export CI=1\r\n$ dnf remove hello\r\n$ dnf clean all\r\n$ avocado run --test-runner=nrunner selftests/functional/test_requirements.py \r\nJOB ID : 35d4cf58034c04eb47be0276197f8ae5f17af82b\r\nJOB LOG : /home/cleber/avocado/job-results/job-2021-05-26T23.55-35d4cf5/job.log\r\n (3/4) selftests/functional/test_requirements.py:BasicTest.test_multiple_success: STARTED\r\n (2/4) selftests/functional/test_requirements.py:BasicTest.test_single_fail: STARTED\r\n (1/4) selftests/functional/test_requirements.py:BasicTest.test_single_success: STARTED\r\n (4/4) selftests/functional/test_requirements.py:BasicTest.test_multiple_fails: STARTED\r\n (2/4) selftests/functional/test_requirements.py:BasicTest.test_single_fail: PASS (4.75 s)\r\n (1/4) selftests/functional/test_requirements.py:BasicTest.test_single_success: PASS (5.29 s)\r\n (3/4) selftests/functional/test_requirements.py:BasicTest.test_multiple_success: FAIL (14.22 s)\r\n (4/4) selftests/functional/test_requirements.py:BasicTest.test_multiple_fails: PASS (20.34 s)\r\nRESULTS : PASS 3 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0\r\nJOB HTML : /home/cleber/avocado/job-results/job-2021-05-26T23.55-35d4cf5/results.html\r\nJOB TIME : 21.52 s\r\n```\r\n\r\nThe reason for the failure is that, multiple tests contain:\r\n\r\n```\r\n:avocado: requirement={\"type\": \"package\", \"name\": \"hello\"}\r\n```\r\n\r\nAnd because the nrunner will run those tests in parallel, it will also run those requirements in parallel. But, given that dnf will hold a systemwide lock, the others can fail. This is reproducible if you try to run multiple `avocado-runner-requirement-package` in parallel manually, such as this which succeeds:\r\n\r\n```\r\n{'status': 'running', 'time': 1345096.780925351}\r\n{'status': 'running', 'time': 1345097.281267814}\r\n{'type': 'stdout', 'log': b'Package(s) hello installed successfully', 'status': 'running', 'time': 1345097.281664319}\r\n{'type': 'stderr', 'log': b'', 'status': 'running', 'time': 1345097.281689778}\r\n{'result': 'pass', 'status': 'finished', 'time': 1345097.281706051}\r\n```\r\n\r\nAnd this one that ends up failing:\r\n\r\n```\r\n{'status': 'running', 'time': 1345097.18655851}\r\n{'status': 'running', 'time': 1345097.687243183}\r\n{'status': 'running', 'time': 1345098.187887105}\r\n{'type': 'stdout', 'log': b'', 'status': 'running', 'time': 1345098.188174427}\r\n{'type': 'stderr', 'log': b'Failed to install hello. Check the package(s) name and if sudo permission is granted.', 'status': 'running', 'time': 1345098.188197785}\r\n{'result': 'error', 'status': 'finished', 'time': 1345098.188217684}\r\n```\r\n\r\nThis issue became apparent because of the problems addressed in #4619 which causes test errors in nrunner to really be signaled as an exit code.\n", "before_files": [{"content": "import time\nfrom multiprocessing import Process, SimpleQueue\n\nfrom ...utils.software_manager.main import MESSAGES\nfrom ...utils.software_manager.manager import SoftwareManager\nfrom .. import nrunner\n\n\nclass RequirementPackageRunner(nrunner.BaseRunner):\n \"\"\"Runner for requirements of type package\n\n This runner handles, the installation, verification and removal of\n packages using the avocado-software-manager.\n\n Runnable attributes usage:\n\n * kind: 'requirement-package'\n\n * uri: not used\n\n * args: not used\n\n * kwargs:\n - name: the package name (required)\n - action: one of 'install', 'check', or 'remove' (optional, defaults\n to 'install')\n \"\"\"\n\n @staticmethod\n def _check(software_manager, package):\n if software_manager.check_installed(package):\n result = 'pass'\n stdout = MESSAGES['check-installed']['success'] % package\n stderr = ''\n else:\n result = 'error'\n stdout = ''\n stderr = MESSAGES['check-installed']['fail'] % package\n return result, stdout, stderr\n\n @staticmethod\n def _install(software_manager, cmd, package):\n result = 'pass'\n stderr = ''\n if not software_manager.check_installed(package):\n if software_manager.install(package):\n stdout = MESSAGES[cmd]['success'] % package\n else:\n result = 'error'\n stdout = ''\n stderr = MESSAGES[cmd]['fail'] % package\n else:\n stdout = MESSAGES['check-installed']['success'] % package\n return result, stdout, stderr\n\n @staticmethod\n def _remove(software_manager, cmd, package):\n result = 'pass'\n stderr = ''\n if software_manager.check_installed(package):\n if software_manager.remove(package):\n stdout = MESSAGES[cmd]['success'] % package\n else:\n result = 'error'\n stdout = ''\n stderr = MESSAGES[cmd]['fail'] % package\n else:\n stdout = MESSAGES['check-installed']['fail'] % package\n return result, stdout, stderr\n\n def _run_software_manager(self, cmd, package, queue):\n software_manager = SoftwareManager()\n\n if not software_manager.is_capable():\n output = {'result': 'error',\n 'stdout': '',\n 'stderr': ('Package manager not supported or not'\n ' available.')}\n queue.put(output)\n\n if cmd == 'install':\n result, stdout, stderr = self._install(software_manager, cmd,\n package)\n\n elif cmd == 'remove':\n result, stdout, stderr = self._remove(software_manager, cmd,\n package)\n\n elif cmd == 'check':\n result, stdout, stderr = self._check(software_manager, package)\n\n output = {'result': result,\n 'stdout': stdout,\n 'stderr': stderr}\n queue.put(output)\n\n def run(self):\n yield self.prepare_status('started')\n # check if there is a valid 'action' argument\n cmd = self.runnable.kwargs.get('action', 'install')\n # avoid invalid arguments\n if cmd not in ['install', 'check', 'remove']:\n stderr = (\"Invalid action %s. Use one of 'install', 'check' or\"\n \" 'remove'\" % cmd)\n yield self.prepare_status('running',\n {'type': 'stderr', 'log': stderr})\n yield self.prepare_status('finished', {'result': 'error'})\n return\n\n package = self.runnable.kwargs.get('name')\n # if package was passed correctly, run avocado-software-manager\n if package is not None:\n # let's spawn it to another process to be able to update the\n # status messages and avoid the software-manager to lock this\n # process\n queue = SimpleQueue()\n process = Process(target=self._run_software_manager,\n args=(cmd, package, queue))\n process.start()\n\n while queue.empty():\n time.sleep(nrunner.RUNNER_RUN_STATUS_INTERVAL)\n yield self.prepare_status('running')\n\n output = queue.get()\n result = output['result']\n stdout = output['stdout']\n stderr = output['stderr']\n else:\n # Otherwise, log the missing package name\n result = 'error'\n stdout = ''\n stderr = ('Package name should be passed as kwargs using'\n ' name=\"package_name\".')\n\n yield self.prepare_status('running',\n {'type': 'stdout',\n 'log': stdout.encode()})\n yield self.prepare_status('running',\n {'type': 'stderr',\n 'log': stderr.encode()})\n yield self.prepare_status('finished', {'result': result})\n\n\nclass RunnerApp(nrunner.BaseRunnerApp):\n PROG_NAME = 'avocado-runner-requirement-package'\n PROG_DESCRIPTION = ('nrunner application for requirements of type package')\n RUNNABLE_KINDS_CAPABLE = {'requirement-package': RequirementPackageRunner}\n\n\ndef main():\n nrunner.main(RunnerApp)\n\n\nif __name__ == '__main__':\n main()\n", "path": "avocado/core/runners/requirement_package.py"}]} | 2,983 | 381 |
gh_patches_debug_64110 | rasdani/github-patches | git_diff | projectmesa__mesa-561 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update tests to use pytest, not nose
Update tests to use pytest, not nose. nose is not maintained anymore.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 import re
4
5 from setuptools import setup, find_packages
6 from codecs import open
7
8 requires = [
9 'click',
10 'cookiecutter',
11 'jupyter',
12 'networkx',
13 'numpy',
14 'pandas',
15 'tornado >= 4.2, < 5.0.0',
16 'tqdm',
17 ]
18
19 extras_require = {
20 'dev': [
21 'coverage',
22 'flake8',
23 'nose',
24 'sphinx',
25 ],
26 'docs': [
27 'sphinx',
28 ]
29 }
30
31 version = ''
32 with open('mesa/__init__.py', 'r') as fd:
33 version = re.search(r'^__version__\s*=\s*[\'"]([^\'"]*)[\'"]',
34 fd.read(), re.MULTILINE).group(1)
35
36 with open('README.rst', 'rb', encoding='utf-8') as f:
37 readme = f.read()
38
39 setup(
40 name='Mesa',
41 version=version,
42 description="Agent-based modeling (ABM) in Python 3+",
43 long_description=readme,
44 author='Project Mesa Team',
45 author_email='[email protected]',
46 url='https://github.com/projectmesa/mesa',
47 packages=find_packages(),
48 package_data={'mesa': ['visualization/templates/*.html', 'visualization/templates/css/*',
49 'visualization/templates/fonts/*', 'visualization/templates/js/*'],
50 'cookiecutter-mesa': ['cookiecutter-mesa/*']},
51 include_package_data=True,
52 install_requires=requires,
53 extras_require=extras_require,
54 keywords='agent based modeling model ABM simulation multi-agent',
55 license='Apache 2.0',
56 zip_safe=False,
57 classifiers=[
58 'Topic :: Scientific/Engineering',
59 'Topic :: Scientific/Engineering :: Artificial Life',
60 'Topic :: Scientific/Engineering :: Artificial Intelligence',
61 'Intended Audience :: Science/Research',
62 'Programming Language :: Python :: 3 :: Only',
63 'License :: OSI Approved :: Apache Software License',
64 'Operating System :: OS Independent',
65 'Development Status :: 3 - Alpha',
66 'Natural Language :: English',
67 ],
68 entry_points='''
69 [console_scripts]
70 mesa=mesa.main:cli
71 ''',
72 )
73
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -20,7 +20,8 @@
'dev': [
'coverage',
'flake8',
- 'nose',
+ 'pytest',
+ 'pytest-cov',
'sphinx',
],
'docs': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -20,7 +20,8 @@\n 'dev': [\n 'coverage',\n 'flake8',\n- 'nose',\n+ 'pytest',\n+ 'pytest-cov',\n 'sphinx',\n ],\n 'docs': [\n", "issue": "Update tests to use pytest, not nose\nUpdate tests to use pytest, not nose. nose is not maintained anymore. \n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nimport re\n\nfrom setuptools import setup, find_packages\nfrom codecs import open\n\nrequires = [\n 'click',\n 'cookiecutter',\n 'jupyter',\n 'networkx',\n 'numpy',\n 'pandas',\n 'tornado >= 4.2, < 5.0.0',\n 'tqdm',\n]\n\nextras_require = {\n 'dev': [\n 'coverage',\n 'flake8',\n 'nose',\n 'sphinx',\n ],\n 'docs': [\n 'sphinx',\n ]\n}\n\nversion = ''\nwith open('mesa/__init__.py', 'r') as fd:\n version = re.search(r'^__version__\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]',\n fd.read(), re.MULTILINE).group(1)\n\nwith open('README.rst', 'rb', encoding='utf-8') as f:\n readme = f.read()\n\nsetup(\n name='Mesa',\n version=version,\n description=\"Agent-based modeling (ABM) in Python 3+\",\n long_description=readme,\n author='Project Mesa Team',\n author_email='[email protected]',\n url='https://github.com/projectmesa/mesa',\n packages=find_packages(),\n package_data={'mesa': ['visualization/templates/*.html', 'visualization/templates/css/*',\n 'visualization/templates/fonts/*', 'visualization/templates/js/*'],\n 'cookiecutter-mesa': ['cookiecutter-mesa/*']},\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n keywords='agent based modeling model ABM simulation multi-agent',\n license='Apache 2.0',\n zip_safe=False,\n classifiers=[\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Life',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3 :: Only',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Development Status :: 3 - Alpha',\n 'Natural Language :: English',\n ],\n entry_points='''\n [console_scripts]\n mesa=mesa.main:cli\n ''',\n)\n", "path": "setup.py"}]} | 1,193 | 76 |
gh_patches_debug_20766 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-2069 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Recommended change to 3.8.6 or above
https://github.com/microsoft/botbuilder-python/blob/7b064bb9f916afc10e931f3713183f57e1d7ca47/libraries/botbuilder-integration-aiohttp/setup.py#L13
I have a conflict when introducing llamaindex, which requires version 3.8.6 or higher!
</issue>
<code>
[start of libraries/botbuilder-integration-aiohttp/setup.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import os
5 from setuptools import setup
6
7 VERSION = os.environ["packageVersion"] if "packageVersion" in os.environ else "4.15.0"
8 REQUIRES = [
9 "botbuilder-schema==4.15.0",
10 "botframework-connector==4.15.0",
11 "botbuilder-core==4.15.0",
12 "yarl>=1.8.1",
13 "aiohttp==3.8.5",
14 ]
15
16 root = os.path.abspath(os.path.dirname(__file__))
17
18 with open(os.path.join(root, "botbuilder", "integration", "aiohttp", "about.py")) as f:
19 package_info = {}
20 info = f.read()
21 exec(info, package_info)
22
23 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
24 long_description = f.read()
25
26 setup(
27 name=package_info["__title__"],
28 version=package_info["__version__"],
29 url=package_info["__uri__"],
30 author=package_info["__author__"],
31 description=package_info["__description__"],
32 keywords=[
33 "BotBuilderIntegrationAiohttp",
34 "bots",
35 "ai",
36 "botframework",
37 "botbuilder",
38 ],
39 long_description=long_description,
40 long_description_content_type="text/x-rst",
41 license=package_info["__license__"],
42 packages=[
43 "botbuilder.integration.aiohttp",
44 "botbuilder.integration.aiohttp.skills",
45 "botbuilder.integration.aiohttp.streaming",
46 ],
47 install_requires=REQUIRES,
48 classifiers=[
49 "Programming Language :: Python :: 3.7",
50 "Intended Audience :: Developers",
51 "License :: OSI Approved :: MIT License",
52 "Operating System :: OS Independent",
53 "Development Status :: 5 - Production/Stable",
54 "Topic :: Scientific/Engineering :: Artificial Intelligence",
55 ],
56 )
57
[end of libraries/botbuilder-integration-aiohttp/setup.py]
[start of libraries/botbuilder-ai/setup.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import os
5 from setuptools import setup
6
7 REQUIRES = [
8 "azure-cognitiveservices-language-luis==0.2.0",
9 "botbuilder-schema==4.15.0",
10 "botbuilder-core==4.15.0",
11 "aiohttp==3.8.5",
12 ]
13
14 TESTS_REQUIRES = ["aiounittest>=1.1.0"]
15
16 root = os.path.abspath(os.path.dirname(__file__))
17
18 with open(os.path.join(root, "botbuilder", "ai", "about.py")) as f:
19 package_info = {}
20 info = f.read()
21 exec(info, package_info)
22
23 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
24 long_description = f.read()
25
26 setup(
27 name=package_info["__title__"],
28 version=package_info["__version__"],
29 url=package_info["__uri__"],
30 author=package_info["__author__"],
31 description=package_info["__description__"],
32 keywords="botbuilder-ai LUIS QnAMaker bots ai botframework botbuilder",
33 long_description=long_description,
34 long_description_content_type="text/x-rst",
35 license=package_info["__license__"],
36 packages=[
37 "botbuilder.ai",
38 "botbuilder.ai.qna",
39 "botbuilder.ai.luis",
40 "botbuilder.ai.qna.models",
41 "botbuilder.ai.qna.utils",
42 "botbuilder.ai.qna.dialogs",
43 ],
44 install_requires=REQUIRES + TESTS_REQUIRES,
45 tests_require=TESTS_REQUIRES,
46 include_package_data=True,
47 classifiers=[
48 "Programming Language :: Python :: 3.7",
49 "Intended Audience :: Developers",
50 "License :: OSI Approved :: MIT License",
51 "Operating System :: OS Independent",
52 "Development Status :: 5 - Production/Stable",
53 "Topic :: Scientific/Engineering :: Artificial Intelligence",
54 ],
55 )
56
[end of libraries/botbuilder-ai/setup.py]
[start of libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import os
5 from setuptools import setup
6
7 REQUIRES = [
8 "applicationinsights>=0.11.9",
9 "aiohttp==3.8.5",
10 "botbuilder-schema==4.15.0",
11 "botframework-connector==4.15.0",
12 "botbuilder-core==4.15.0",
13 "botbuilder-applicationinsights==4.15.0",
14 ]
15 TESTS_REQUIRES = [
16 "aiounittest==1.3.0",
17 ]
18
19 root = os.path.abspath(os.path.dirname(__file__))
20
21 with open(
22 os.path.join(
23 root, "botbuilder", "integration", "applicationinsights", "aiohttp", "about.py"
24 )
25 ) as f:
26 package_info = {}
27 info = f.read()
28 exec(info, package_info)
29
30 with open(os.path.join(root, "README.rst"), encoding="utf-8") as f:
31 long_description = f.read()
32
33 setup(
34 name=package_info["__title__"],
35 version=package_info["__version__"],
36 url=package_info["__uri__"],
37 author=package_info["__author__"],
38 description=package_info["__description__"],
39 keywords=[
40 "BotBuilderApplicationInsights",
41 "bots",
42 "ai",
43 "botframework",
44 "botbuilder",
45 "aiohttp",
46 ],
47 long_description=long_description,
48 long_description_content_type="text/x-rst",
49 license=package_info["__license__"],
50 packages=["botbuilder.integration.applicationinsights.aiohttp"],
51 install_requires=REQUIRES + TESTS_REQUIRES,
52 tests_require=TESTS_REQUIRES,
53 include_package_data=True,
54 classifiers=[
55 "Programming Language :: Python :: 3.7",
56 "Intended Audience :: Developers",
57 "License :: OSI Approved :: MIT License",
58 "Operating System :: OS Independent",
59 "Development Status :: 5 - Production/Stable",
60 "Topic :: Scientific/Engineering :: Artificial Intelligence",
61 ],
62 )
63
[end of libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libraries/botbuilder-ai/setup.py b/libraries/botbuilder-ai/setup.py
--- a/libraries/botbuilder-ai/setup.py
+++ b/libraries/botbuilder-ai/setup.py
@@ -8,7 +8,7 @@
"azure-cognitiveservices-language-luis==0.2.0",
"botbuilder-schema==4.15.0",
"botbuilder-core==4.15.0",
- "aiohttp==3.8.5",
+ "aiohttp==3.9.3",
]
TESTS_REQUIRES = ["aiounittest>=1.1.0"]
diff --git a/libraries/botbuilder-integration-aiohttp/setup.py b/libraries/botbuilder-integration-aiohttp/setup.py
--- a/libraries/botbuilder-integration-aiohttp/setup.py
+++ b/libraries/botbuilder-integration-aiohttp/setup.py
@@ -10,7 +10,7 @@
"botframework-connector==4.15.0",
"botbuilder-core==4.15.0",
"yarl>=1.8.1",
- "aiohttp==3.8.5",
+ "aiohttp==3.9.3",
]
root = os.path.abspath(os.path.dirname(__file__))
diff --git a/libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py b/libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py
--- a/libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py
+++ b/libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py
@@ -6,7 +6,7 @@
REQUIRES = [
"applicationinsights>=0.11.9",
- "aiohttp==3.8.5",
+ "aiohttp==3.9.3",
"botbuilder-schema==4.15.0",
"botframework-connector==4.15.0",
"botbuilder-core==4.15.0",
| {"golden_diff": "diff --git a/libraries/botbuilder-ai/setup.py b/libraries/botbuilder-ai/setup.py\n--- a/libraries/botbuilder-ai/setup.py\n+++ b/libraries/botbuilder-ai/setup.py\n@@ -8,7 +8,7 @@\n \"azure-cognitiveservices-language-luis==0.2.0\",\n \"botbuilder-schema==4.15.0\",\n \"botbuilder-core==4.15.0\",\n- \"aiohttp==3.8.5\",\n+ \"aiohttp==3.9.3\",\n ]\n \n TESTS_REQUIRES = [\"aiounittest>=1.1.0\"]\ndiff --git a/libraries/botbuilder-integration-aiohttp/setup.py b/libraries/botbuilder-integration-aiohttp/setup.py\n--- a/libraries/botbuilder-integration-aiohttp/setup.py\n+++ b/libraries/botbuilder-integration-aiohttp/setup.py\n@@ -10,7 +10,7 @@\n \"botframework-connector==4.15.0\",\n \"botbuilder-core==4.15.0\",\n \"yarl>=1.8.1\",\n- \"aiohttp==3.8.5\",\n+ \"aiohttp==3.9.3\",\n ]\n \n root = os.path.abspath(os.path.dirname(__file__))\ndiff --git a/libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py b/libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py\n--- a/libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py\n+++ b/libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py\n@@ -6,7 +6,7 @@\n \n REQUIRES = [\n \"applicationinsights>=0.11.9\",\n- \"aiohttp==3.8.5\",\n+ \"aiohttp==3.9.3\",\n \"botbuilder-schema==4.15.0\",\n \"botframework-connector==4.15.0\",\n \"botbuilder-core==4.15.0\",\n", "issue": "Recommended change to 3.8.6 or above\nhttps://github.com/microsoft/botbuilder-python/blob/7b064bb9f916afc10e931f3713183f57e1d7ca47/libraries/botbuilder-integration-aiohttp/setup.py#L13\r\n\r\nI have a conflict when introducing llamaindex, which requires version 3.8.6 or higher!\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport os\nfrom setuptools import setup\n\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.15.0\"\nREQUIRES = [\n \"botbuilder-schema==4.15.0\",\n \"botframework-connector==4.15.0\",\n \"botbuilder-core==4.15.0\",\n \"yarl>=1.8.1\",\n \"aiohttp==3.8.5\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"botbuilder\", \"integration\", \"aiohttp\", \"about.py\")) as f:\n package_info = {}\n info = f.read()\n exec(info, package_info)\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=package_info[\"__title__\"],\n version=package_info[\"__version__\"],\n url=package_info[\"__uri__\"],\n author=package_info[\"__author__\"],\n description=package_info[\"__description__\"],\n keywords=[\n \"BotBuilderIntegrationAiohttp\",\n \"bots\",\n \"ai\",\n \"botframework\",\n \"botbuilder\",\n ],\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=package_info[\"__license__\"],\n packages=[\n \"botbuilder.integration.aiohttp\",\n \"botbuilder.integration.aiohttp.skills\",\n \"botbuilder.integration.aiohttp.streaming\",\n ],\n install_requires=REQUIRES,\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botbuilder-integration-aiohttp/setup.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport os\nfrom setuptools import setup\n\nREQUIRES = [\n \"azure-cognitiveservices-language-luis==0.2.0\",\n \"botbuilder-schema==4.15.0\",\n \"botbuilder-core==4.15.0\",\n \"aiohttp==3.8.5\",\n]\n\nTESTS_REQUIRES = [\"aiounittest>=1.1.0\"]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"botbuilder\", \"ai\", \"about.py\")) as f:\n package_info = {}\n info = f.read()\n exec(info, package_info)\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=package_info[\"__title__\"],\n version=package_info[\"__version__\"],\n url=package_info[\"__uri__\"],\n author=package_info[\"__author__\"],\n description=package_info[\"__description__\"],\n keywords=\"botbuilder-ai LUIS QnAMaker bots ai botframework botbuilder\",\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=package_info[\"__license__\"],\n packages=[\n \"botbuilder.ai\",\n \"botbuilder.ai.qna\",\n \"botbuilder.ai.luis\",\n \"botbuilder.ai.qna.models\",\n \"botbuilder.ai.qna.utils\",\n \"botbuilder.ai.qna.dialogs\",\n ],\n install_requires=REQUIRES + TESTS_REQUIRES,\n tests_require=TESTS_REQUIRES,\n include_package_data=True,\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botbuilder-ai/setup.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport os\nfrom setuptools import setup\n\nREQUIRES = [\n \"applicationinsights>=0.11.9\",\n \"aiohttp==3.8.5\",\n \"botbuilder-schema==4.15.0\",\n \"botframework-connector==4.15.0\",\n \"botbuilder-core==4.15.0\",\n \"botbuilder-applicationinsights==4.15.0\",\n]\nTESTS_REQUIRES = [\n \"aiounittest==1.3.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(\n os.path.join(\n root, \"botbuilder\", \"integration\", \"applicationinsights\", \"aiohttp\", \"about.py\"\n )\n) as f:\n package_info = {}\n info = f.read()\n exec(info, package_info)\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=package_info[\"__title__\"],\n version=package_info[\"__version__\"],\n url=package_info[\"__uri__\"],\n author=package_info[\"__author__\"],\n description=package_info[\"__description__\"],\n keywords=[\n \"BotBuilderApplicationInsights\",\n \"bots\",\n \"ai\",\n \"botframework\",\n \"botbuilder\",\n \"aiohttp\",\n ],\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=package_info[\"__license__\"],\n packages=[\"botbuilder.integration.applicationinsights.aiohttp\"],\n install_requires=REQUIRES + TESTS_REQUIRES,\n tests_require=TESTS_REQUIRES,\n include_package_data=True,\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "libraries/botbuilder-integration-applicationinsights-aiohttp/setup.py"}]} | 2,351 | 460 |
gh_patches_debug_33101 | rasdani/github-patches | git_diff | pypa__virtualenv-1578 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
virtualenv 20: is the symlink hack really worth it?
I did some timing and it seems like the trouble it causes is not really worth it -- at the very least I'd like an option which copies instead of symlinks
Here's some timing I did to try and guage the differences -- since there's no options I could find I toggled this line to `if False` to get my "copy" data: https://github.com/pypa/virtualenv/blob/8c2985c2946e767bb6f74a7e22f51add17b38987/src/virtualenv/seed/via_app_data/via_app_data.py#L92
### with symlinks
my platform for this example is relatively low powered, a 2015 MBP
```console
$ rm -rf vvv; time virtualenv vvv
real 0m0.128s
user 0m0.107s
sys 0m0.023s
$ rm -rf vvv; time virtualenv vvv
real 0m0.128s
user 0m0.118s
sys 0m0.012s
$ rm -rf vvv; time virtualenv vvv
real 0m0.123s
user 0m0.121s
sys 0m0.004s
$ rm -rf vvv; time virtualenv vvv
real 0m0.119s
user 0m0.117s
sys 0m0.004s
$ rm -rf vvv; time virtualenv vvv
real 0m0.127s
user 0m0.109s
sys 0m0.020s
```
disk usage:
```console
$ du -hs vvv
128K vvv
```
problems this can cause:
```console
$ # copied to same path on other machine
$ ./vvv/bin/python -c 'import setuptools'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'setuptools'
$ ./vvv/bin/pip --help
Traceback (most recent call last):
File "./vvv/bin/pip", line 6, in <module>
from pip._internal.cli.main import main
ModuleNotFoundError: No module named 'pip'
```
### with copies
```console
$ rm -rf vvv; time virtualenv vvv
real 0m0.179s
user 0m0.155s
sys 0m0.050s
$ rm -rf vvv; time virtualenv vvv
real 0m0.185s
user 0m0.158s
sys 0m0.050s
$ rm -rf vvv; time virtualenv vvv
real 0m0.183s
user 0m0.160s
sys 0m0.048s
$ rm -rf vvv; time virtualenv vvv
real 0m0.172s
user 0m0.162s
sys 0m0.035s
$ rm -rf vvv; time virtualenv vvv
real 0m0.181s
user 0m0.142s
sys 0m0.065s
```
```console
$ du -hs vvv
7.5M vvv
```
### trade off
so we're looking at ~60ms of time overhead -- which (imo) isn't that much -- the disk usage is another concern but we're still taking that usage one way or another
### other considerations
hardlinks would be another consideration -- it would alleviate the problems I have with symlinks (caches, using virtualenv as a deployment mechanism, etc.) -- I'd have to do some implementation work to verify that case
</issue>
<code>
[start of src/virtualenv/seed/via_app_data/via_app_data.py]
1 """Bootstrap"""
2 from __future__ import absolute_import, unicode_literals
3
4 import logging
5 import shutil
6 from contextlib import contextmanager
7 from threading import Lock, Thread
8
9 import six
10
11 from virtualenv.dirs import default_data_dir
12 from virtualenv.seed.embed.base_embed import BaseEmbed
13 from virtualenv.seed.embed.wheels.acquire import get_wheels
14
15 from .pip_install.copy import CopyPipInstall
16 from .pip_install.symlink import SymlinkPipInstall
17
18
19 class FromAppData(BaseEmbed):
20 def __init__(self, options):
21 super(FromAppData, self).__init__(options)
22 self.clear = options.clear_app_data
23 self.app_data_dir = default_data_dir() / "seed-v1"
24 self.symlinks = getattr(options, "copies", False) is False
25
26 @classmethod
27 def add_parser_arguments(cls, parser, interpreter):
28 super(FromAppData, cls).add_parser_arguments(parser, interpreter)
29 parser.add_argument(
30 "--clear-app-data",
31 dest="clear_app_data",
32 action="store_true",
33 help="clear the app data folder of seed images ({})".format((default_data_dir() / "seed-v1").path),
34 default=False,
35 )
36
37 def run(self, creator):
38 if not self.enabled:
39 return
40 base_cache = self.app_data_dir / creator.interpreter.version_release_str
41 with self._get_seed_wheels(creator, base_cache) as name_to_whl:
42 pip_version = name_to_whl["pip"].stem.split("-")[1]
43 installer_class = self.installer_class(pip_version)
44
45 def _install(name, wheel):
46 logging.debug("install %s from wheel %s via %s", name, wheel, installer_class.__name__)
47 image_folder = base_cache.path / "image" / installer_class.__name__ / wheel.stem
48 installer = installer_class(wheel, creator, image_folder)
49 if self.clear:
50 installer.clear()
51 if not installer.has_image():
52 installer.build_image()
53 installer.install()
54
55 threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())
56 for thread in threads:
57 thread.start()
58 for thread in threads:
59 thread.join()
60
61 @contextmanager
62 def _get_seed_wheels(self, creator, base_cache):
63 with base_cache.lock_for_key("wheels"):
64 wheels_to = base_cache.path / "wheels"
65 if self.clear and wheels_to.exists():
66 shutil.rmtree(six.ensure_text(str(wheels_to)))
67 wheels_to.mkdir(parents=True, exist_ok=True)
68 name_to_whl, lock = {}, Lock()
69
70 def _get(package, version):
71 result = get_wheels(
72 creator.interpreter.version_release_str,
73 wheels_to,
74 self.extra_search_dir,
75 self.download,
76 {package: version},
77 )
78 with lock:
79 name_to_whl.update(result)
80
81 threads = list(Thread(target=_get, args=(pkg, v)) for pkg, v in self.package_version().items())
82 for thread in threads:
83 thread.start()
84 for thread in threads:
85 thread.join()
86
87 yield name_to_whl
88
89 def installer_class(self, pip_version):
90 if self.symlinks:
91 # symlink support requires pip 19.3+
92 pip_version_int = tuple(int(i) for i in pip_version.split(".")[0:2])
93 if pip_version_int >= (19, 3):
94 return SymlinkPipInstall
95 return CopyPipInstall
96
97 def __unicode__(self):
98 return super(FromAppData, self).__unicode__() + " app_data_dir={}".format(self.app_data_dir.path)
99
[end of src/virtualenv/seed/via_app_data/via_app_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/virtualenv/seed/via_app_data/via_app_data.py b/src/virtualenv/seed/via_app_data/via_app_data.py
--- a/src/virtualenv/seed/via_app_data/via_app_data.py
+++ b/src/virtualenv/seed/via_app_data/via_app_data.py
@@ -9,6 +9,7 @@
import six
from virtualenv.dirs import default_data_dir
+from virtualenv.info import fs_supports_symlink
from virtualenv.seed.embed.base_embed import BaseEmbed
from virtualenv.seed.embed.wheels.acquire import get_wheels
@@ -21,7 +22,7 @@
super(FromAppData, self).__init__(options)
self.clear = options.clear_app_data
self.app_data_dir = default_data_dir() / "seed-v1"
- self.symlinks = getattr(options, "copies", False) is False
+ self.symlinks = options.symlink_app_data
@classmethod
def add_parser_arguments(cls, parser, interpreter):
@@ -33,6 +34,16 @@
help="clear the app data folder of seed images ({})".format((default_data_dir() / "seed-v1").path),
default=False,
)
+ can_symlink = fs_supports_symlink()
+ parser.add_argument(
+ "--symlink-app-data",
+ dest="symlink_app_data",
+ action="store_true" if can_symlink else "store_false",
+ help="{} symlink the python packages from the app-data folder (requires seed pip>=19.3)".format(
+ "" if can_symlink else "not supported - "
+ ),
+ default=False,
+ )
def run(self, creator):
if not self.enabled:
@@ -95,4 +106,6 @@
return CopyPipInstall
def __unicode__(self):
- return super(FromAppData, self).__unicode__() + " app_data_dir={}".format(self.app_data_dir.path)
+ return super(FromAppData, self).__unicode__() + " app_data_dir={} via={}".format(
+ self.app_data_dir.path, "symlink" if self.symlinks else "copy"
+ )
| {"golden_diff": "diff --git a/src/virtualenv/seed/via_app_data/via_app_data.py b/src/virtualenv/seed/via_app_data/via_app_data.py\n--- a/src/virtualenv/seed/via_app_data/via_app_data.py\n+++ b/src/virtualenv/seed/via_app_data/via_app_data.py\n@@ -9,6 +9,7 @@\n import six\n \n from virtualenv.dirs import default_data_dir\n+from virtualenv.info import fs_supports_symlink\n from virtualenv.seed.embed.base_embed import BaseEmbed\n from virtualenv.seed.embed.wheels.acquire import get_wheels\n \n@@ -21,7 +22,7 @@\n super(FromAppData, self).__init__(options)\n self.clear = options.clear_app_data\n self.app_data_dir = default_data_dir() / \"seed-v1\"\n- self.symlinks = getattr(options, \"copies\", False) is False\n+ self.symlinks = options.symlink_app_data\n \n @classmethod\n def add_parser_arguments(cls, parser, interpreter):\n@@ -33,6 +34,16 @@\n help=\"clear the app data folder of seed images ({})\".format((default_data_dir() / \"seed-v1\").path),\n default=False,\n )\n+ can_symlink = fs_supports_symlink()\n+ parser.add_argument(\n+ \"--symlink-app-data\",\n+ dest=\"symlink_app_data\",\n+ action=\"store_true\" if can_symlink else \"store_false\",\n+ help=\"{} symlink the python packages from the app-data folder (requires seed pip>=19.3)\".format(\n+ \"\" if can_symlink else \"not supported - \"\n+ ),\n+ default=False,\n+ )\n \n def run(self, creator):\n if not self.enabled:\n@@ -95,4 +106,6 @@\n return CopyPipInstall\n \n def __unicode__(self):\n- return super(FromAppData, self).__unicode__() + \" app_data_dir={}\".format(self.app_data_dir.path)\n+ return super(FromAppData, self).__unicode__() + \" app_data_dir={} via={}\".format(\n+ self.app_data_dir.path, \"symlink\" if self.symlinks else \"copy\"\n+ )\n", "issue": "virtualenv 20: is the symlink hack really worth it?\nI did some timing and it seems like the trouble it causes is not really worth it -- at the very least I'd like an option which copies instead of symlinks\r\n\r\nHere's some timing I did to try and guage the differences -- since there's no options I could find I toggled this line to `if False` to get my \"copy\" data: https://github.com/pypa/virtualenv/blob/8c2985c2946e767bb6f74a7e22f51add17b38987/src/virtualenv/seed/via_app_data/via_app_data.py#L92\r\n\r\n### with symlinks\r\n\r\nmy platform for this example is relatively low powered, a 2015 MBP\r\n\r\n```console\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.128s\r\nuser\t0m0.107s\r\nsys\t0m0.023s\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.128s\r\nuser\t0m0.118s\r\nsys\t0m0.012s\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.123s\r\nuser\t0m0.121s\r\nsys\t0m0.004s\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.119s\r\nuser\t0m0.117s\r\nsys\t0m0.004s\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.127s\r\nuser\t0m0.109s\r\nsys\t0m0.020s\r\n```\r\n\r\ndisk usage:\r\n\r\n```console\r\n$ du -hs vvv\r\n128K\tvvv\r\n```\r\n\r\nproblems this can cause:\r\n\r\n```console\r\n$ # copied to same path on other machine\r\n$ ./vvv/bin/python -c 'import setuptools'\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'setuptools'\r\n$ ./vvv/bin/pip --help\r\nTraceback (most recent call last):\r\n File \"./vvv/bin/pip\", line 6, in <module>\r\n from pip._internal.cli.main import main\r\nModuleNotFoundError: No module named 'pip'\r\n```\r\n\r\n### with copies\r\n\r\n```console\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.179s\r\nuser\t0m0.155s\r\nsys\t0m0.050s\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.185s\r\nuser\t0m0.158s\r\nsys\t0m0.050s\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.183s\r\nuser\t0m0.160s\r\nsys\t0m0.048s\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.172s\r\nuser\t0m0.162s\r\nsys\t0m0.035s\r\n$ rm -rf vvv; time virtualenv vvv\r\n\r\nreal\t0m0.181s\r\nuser\t0m0.142s\r\nsys\t0m0.065s\r\n```\r\n\r\n```console\r\n$ du -hs vvv\r\n7.5M\tvvv\r\n```\r\n\r\n### trade off\r\n\r\nso we're looking at ~60ms of time overhead -- which (imo) isn't that much -- the disk usage is another concern but we're still taking that usage one way or another\r\n\r\n### other considerations\r\n\r\nhardlinks would be another consideration -- it would alleviate the problems I have with symlinks (caches, using virtualenv as a deployment mechanism, etc.) -- I'd have to do some implementation work to verify that case\n", "before_files": [{"content": "\"\"\"Bootstrap\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nimport shutil\nfrom contextlib import contextmanager\nfrom threading import Lock, Thread\n\nimport six\n\nfrom virtualenv.dirs import default_data_dir\nfrom virtualenv.seed.embed.base_embed import BaseEmbed\nfrom virtualenv.seed.embed.wheels.acquire import get_wheels\n\nfrom .pip_install.copy import CopyPipInstall\nfrom .pip_install.symlink import SymlinkPipInstall\n\n\nclass FromAppData(BaseEmbed):\n def __init__(self, options):\n super(FromAppData, self).__init__(options)\n self.clear = options.clear_app_data\n self.app_data_dir = default_data_dir() / \"seed-v1\"\n self.symlinks = getattr(options, \"copies\", False) is False\n\n @classmethod\n def add_parser_arguments(cls, parser, interpreter):\n super(FromAppData, cls).add_parser_arguments(parser, interpreter)\n parser.add_argument(\n \"--clear-app-data\",\n dest=\"clear_app_data\",\n action=\"store_true\",\n help=\"clear the app data folder of seed images ({})\".format((default_data_dir() / \"seed-v1\").path),\n default=False,\n )\n\n def run(self, creator):\n if not self.enabled:\n return\n base_cache = self.app_data_dir / creator.interpreter.version_release_str\n with self._get_seed_wheels(creator, base_cache) as name_to_whl:\n pip_version = name_to_whl[\"pip\"].stem.split(\"-\")[1]\n installer_class = self.installer_class(pip_version)\n\n def _install(name, wheel):\n logging.debug(\"install %s from wheel %s via %s\", name, wheel, installer_class.__name__)\n image_folder = base_cache.path / \"image\" / installer_class.__name__ / wheel.stem\n installer = installer_class(wheel, creator, image_folder)\n if self.clear:\n installer.clear()\n if not installer.has_image():\n installer.build_image()\n installer.install()\n\n threads = list(Thread(target=_install, args=(n, w)) for n, w in name_to_whl.items())\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n\n @contextmanager\n def _get_seed_wheels(self, creator, base_cache):\n with base_cache.lock_for_key(\"wheels\"):\n wheels_to = base_cache.path / \"wheels\"\n if self.clear and wheels_to.exists():\n shutil.rmtree(six.ensure_text(str(wheels_to)))\n wheels_to.mkdir(parents=True, exist_ok=True)\n name_to_whl, lock = {}, Lock()\n\n def _get(package, version):\n result = get_wheels(\n creator.interpreter.version_release_str,\n wheels_to,\n self.extra_search_dir,\n self.download,\n {package: version},\n )\n with lock:\n name_to_whl.update(result)\n\n threads = list(Thread(target=_get, args=(pkg, v)) for pkg, v in self.package_version().items())\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n\n yield name_to_whl\n\n def installer_class(self, pip_version):\n if self.symlinks:\n # symlink support requires pip 19.3+\n pip_version_int = tuple(int(i) for i in pip_version.split(\".\")[0:2])\n if pip_version_int >= (19, 3):\n return SymlinkPipInstall\n return CopyPipInstall\n\n def __unicode__(self):\n return super(FromAppData, self).__unicode__() + \" app_data_dir={}\".format(self.app_data_dir.path)\n", "path": "src/virtualenv/seed/via_app_data/via_app_data.py"}]} | 2,448 | 491 |
gh_patches_debug_3606 | rasdani/github-patches | git_diff | OBOFoundry__OBOFoundry.github.io-802 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
travis on master failing, due to metadata violations from new jsonschema checks
There are two things wrong:
- the validate script assumes a util/reports folder
- hp is failing; we already know that hp has a custom license and this should be reported elsewhere and is not a schema violation
</issue>
<code>
[start of util/validate-metadata.py]
1 #!/usr/bin/env python3
2
3 import ast
4 import sys
5 import json
6 import jsonschema
7 import re
8
9 # file paths
10 data_file = "../registry/ontologies.jsonld"
11 schema_file = "metadata-schema.json"
12 schema_lite_file = "metadata-schema-lite.json"
13 report_file = "reports/metadata-violations.csv"
14
15 # ultra-escaped regex strings
16 email_sub = 'does not match \'\\^\\[\\^@\\]\\+\\$\''
17 fmt_sub = ('does not match \'\\^\\[0\\-9A\\-Za\\-z\\-_\\\\\\\\/\\]\\+'
18 '\\\\\\\\.\\(owl\\|obo\\|json\\|omn\\|ofn\\|owx\\|ttl\\|owl'
19 '\\\\\\\\.gz\\)\\$\'')
20
21 def validate():
22 """
23 Validate registry metadata.
24 """
25 print("--- validating metadata against {0} ---".format(schema_file))
26 data = load_data()
27 schema = load_schema()
28 # validate each object
29 errors = {}
30 for item in data["ontologies"]:
31 if 'is_obsolete' in item and item["is_obsolete"] is True:
32 continue
33 # skip any 'validate: false' ontologies
34 if 'validate' in item and item["validate"] is False:
35 continue
36 ont_id = item["id"]
37 try:
38 jsonschema.validate(item, schema)
39 except jsonschema.exceptions.ValidationError as ve:
40 print("ERROR in {0}".format(ont_id))
41 errors[ont_id] = format_msg(ve)
42 if errors:
43 write_errors(errors)
44 else:
45 print("SUCCESS - no errors found in metadata")
46 sys.exit(0)
47
48 def format_msg(ve):
49 """
50 Format exception message from jsonchema.validate(...).
51 """
52 # replace u characters
53 replace_u = re.sub('u\'', '\'', ve.message)
54 # replace scary regex strings
55 replace_email = re.sub(
56 email_sub, 'is not valid for \'contact.label\'', replace_u)
57 msg = re.sub(fmt_sub, 'is not valid for \'products.id\'', replace_email)
58
59 # check if output is for license error
60 is_license = re.search('({\'url\'.+?\'label\'.+?})', msg)
61 if is_license:
62 return format_license_msg(is_license.group(1))
63
64 # check if output is for list error
65 is_list = re.search('(\\[.+?\\]) is not of type \'string\'', msg)
66 if is_list:
67 return format_list_msg(is_list.group(1), ve)
68
69 # otherwise return the message
70 return msg
71
72 def format_license_msg(substr):
73 """
74 Format an exception message for a license issue.
75 """
76 # process to dict
77 d = json.loads(substr.replace('\'', '"'))
78 url = d['url']
79 label = d['label']
80 return '\'{0}\' <{1}> is not valid for \'license\''.format(label, url)
81
82 def format_list_msg(substr, ve):
83 """
84 Format an exception for an unexpected list.
85 """
86 l = json.loads(substr.replace('\'', '"'))
87 # use the full message to find the violating property
88 prop_find = re.search('On instance\\[(\'.+?\')\\]', str(ve))
89 if prop_find:
90 prop = prop_find.group(1)
91 return '{0} expects one value, got {1}'.format(prop, len(l))
92 else:
93 return substr
94
95 def load_schema():
96 """
97 Load the schema to validate against.
98 """
99 # read the schema
100 with open(schema_file) as f:
101 schema = json.load(f)
102 return schema
103
104 def load_data():
105 """
106 Load the data to validate.
107 """
108 # read the JSON-LD data
109 with open(data_file) as f:
110 data = json.load(f)
111 return data
112
113 def write_errors(errors):
114 """
115 Write validation errors to a user-friendly report.
116 """
117 with open(report_file, 'w+') as f:
118 f.write("ID,ERROR\n")
119 for ont_id, msg in errors.items():
120 f.write('"' + ont_id + '","' + msg + '"\n')
121 print(
122 "VALIDATION FAILED: {0} errors - see {1} for details".format(
123 len(errors), report_file))
124 sys.exit(1)
125
126 # run the process!
127 if __name__ == '__main__':
128 validate()
129
[end of util/validate-metadata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/util/validate-metadata.py b/util/validate-metadata.py
--- a/util/validate-metadata.py
+++ b/util/validate-metadata.py
@@ -7,9 +7,9 @@
import re
# file paths
-data_file = "../registry/ontologies.jsonld"
-schema_file = "metadata-schema.json"
-schema_lite_file = "metadata-schema-lite.json"
+data_file = "registry/ontologies.jsonld"
+schema_file = "util/metadata-schema.json"
+schema_lite_file = "util/metadata-schema-lite.json"
report_file = "reports/metadata-violations.csv"
# ultra-escaped regex strings
| {"golden_diff": "diff --git a/util/validate-metadata.py b/util/validate-metadata.py\n--- a/util/validate-metadata.py\n+++ b/util/validate-metadata.py\n@@ -7,9 +7,9 @@\n import re\n \n # file paths\n-data_file = \"../registry/ontologies.jsonld\"\n-schema_file = \"metadata-schema.json\"\n-schema_lite_file = \"metadata-schema-lite.json\"\n+data_file = \"registry/ontologies.jsonld\"\n+schema_file = \"util/metadata-schema.json\"\n+schema_lite_file = \"util/metadata-schema-lite.json\"\n report_file = \"reports/metadata-violations.csv\"\n \n # ultra-escaped regex strings\n", "issue": "travis on master failing, due to metadata violations from new jsonschema checks\nThere are two things wrong:\r\n\r\n - the validate script assumes a util/reports folder\r\n - hp is failing; we already know that hp has a custom license and this should be reported elsewhere and is not a schema violation\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport ast\nimport sys\nimport json\nimport jsonschema\nimport re\n\n# file paths\ndata_file = \"../registry/ontologies.jsonld\"\nschema_file = \"metadata-schema.json\"\nschema_lite_file = \"metadata-schema-lite.json\"\nreport_file = \"reports/metadata-violations.csv\"\n\n# ultra-escaped regex strings\nemail_sub = 'does not match \\'\\\\^\\\\[\\\\^@\\\\]\\\\+\\\\$\\''\nfmt_sub = ('does not match \\'\\\\^\\\\[0\\\\-9A\\\\-Za\\\\-z\\\\-_\\\\\\\\\\\\\\\\/\\\\]\\\\+'\n '\\\\\\\\\\\\\\\\.\\\\(owl\\\\|obo\\\\|json\\\\|omn\\\\|ofn\\\\|owx\\\\|ttl\\\\|owl'\n '\\\\\\\\\\\\\\\\.gz\\\\)\\\\$\\'')\n\ndef validate():\n\t\"\"\"\n\tValidate registry metadata.\n\t\"\"\"\n\tprint(\"--- validating metadata against {0} ---\".format(schema_file))\n\tdata = load_data()\n\tschema = load_schema()\n\t# validate each object\n\terrors = {}\n\tfor item in data[\"ontologies\"]:\n\t\tif 'is_obsolete' in item and item[\"is_obsolete\"] is True:\n\t\t\tcontinue\n\t\t# skip any 'validate: false' ontologies\n\t\tif 'validate' in item and item[\"validate\"] is False:\n\t\t\tcontinue\n\t\tont_id = item[\"id\"]\n\t\ttry:\n\t\t\tjsonschema.validate(item, schema)\n\t\texcept jsonschema.exceptions.ValidationError as ve:\n\t\t\tprint(\"ERROR in {0}\".format(ont_id))\n\t\t\terrors[ont_id] = format_msg(ve)\n\tif errors:\n\t\twrite_errors(errors)\n\telse:\n\t\tprint(\"SUCCESS - no errors found in metadata\")\n\t\tsys.exit(0)\n\ndef format_msg(ve):\n\t\"\"\"\n\tFormat exception message from jsonchema.validate(...).\n\t\"\"\"\n\t# replace u characters\n\treplace_u = re.sub('u\\'', '\\'', ve.message)\n\t# replace scary regex strings\n\treplace_email = re.sub(\n\t\temail_sub, 'is not valid for \\'contact.label\\'', replace_u)\n\tmsg = re.sub(fmt_sub, 'is not valid for \\'products.id\\'', replace_email)\n\n\t# check if output is for license error\n\tis_license = re.search('({\\'url\\'.+?\\'label\\'.+?})', msg)\n\tif is_license:\n\t\treturn format_license_msg(is_license.group(1))\n\n\t# check if output is for list error\n\tis_list = re.search('(\\\\[.+?\\\\]) is not of type \\'string\\'', msg)\n\tif is_list:\n\t\treturn format_list_msg(is_list.group(1), ve)\n\n\t# otherwise return the message\n\treturn msg\n\ndef format_license_msg(substr):\n\t\"\"\"\n\tFormat an exception message for a license issue.\n\t\"\"\"\n\t# process to dict\n\td = json.loads(substr.replace('\\'', '\"'))\n\turl = d['url']\n\tlabel = d['label']\n\treturn '\\'{0}\\' <{1}> is not valid for \\'license\\''.format(label, url)\n\ndef format_list_msg(substr, ve):\n\t\"\"\"\n\tFormat an exception for an unexpected list.\n\t\"\"\"\n\tl = json.loads(substr.replace('\\'', '\"'))\n\t# use the full message to find the violating property\n\tprop_find = re.search('On instance\\\\[(\\'.+?\\')\\\\]', str(ve))\n\tif prop_find:\n\t\tprop = prop_find.group(1)\n\t\treturn '{0} expects one value, got {1}'.format(prop, len(l))\n\telse:\n\t\treturn substr\n\ndef load_schema():\n\t\"\"\"\n\tLoad the schema to validate against.\n\t\"\"\"\n\t# read the schema\n\twith open(schema_file) as f:\n\t\tschema = json.load(f)\n\treturn schema\n\ndef load_data():\n\t\"\"\"\n\tLoad the data to validate.\n\t\"\"\"\n\t# read the JSON-LD data\n\twith open(data_file) as f:\n\t\tdata = json.load(f)\n\treturn data\n\ndef write_errors(errors):\n\t\"\"\"\n\tWrite validation errors to a user-friendly report.\n\t\"\"\"\n\twith open(report_file, 'w+') as f:\n\t\tf.write(\"ID,ERROR\\n\")\n\t\tfor ont_id, msg in errors.items():\n\t\t\tf.write('\"' + ont_id + '\",\"' + msg + '\"\\n')\n\tprint(\n\t\t\"VALIDATION FAILED: {0} errors - see {1} for details\".format(\n\t\t\tlen(errors), report_file))\n\tsys.exit(1)\n\n# run the process!\nif __name__ == '__main__':\n\tvalidate()\n", "path": "util/validate-metadata.py"}]} | 1,886 | 133 |
gh_patches_debug_437 | rasdani/github-patches | git_diff | pypa__setuptools-2584 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add mechanism for side-by-side comparison of setup.py and its equivalent setup.cfg
We have many documentation examples that are purely declarative and are either documented as `setup.py` or `setup.cfg`. It would be really awesome if, for each of these, we had the option to have either both versions side-by-side or, even better, in a sort of "tabbed container", like the one in the [code sample at the bottom of this example](https://leetcode.com/articles/median-of-two-sorted-arrays/).
Requirements for this:
1. Cannot *link to* any third-party javascript dependencies. Ideally we wouldn't use any at all, but if you do they must be vendored in the documentation.
2. If javascript is disabled, it has to fall back to something intelligible.
Ideally it would be implemented in pure CSS / HTML if that's at all possible.
</issue>
<code>
[start of docs/conf.py]
1 extensions = ['sphinx.ext.autodoc', 'jaraco.packaging.sphinx', 'rst.linker']
2
3 master_doc = "index"
4
5 link_files = {
6 '../CHANGES.rst': dict(
7 using=dict(
8 BB='https://bitbucket.org',
9 GH='https://github.com',
10 ),
11 replace=[
12 dict(
13 pattern=r'(Issue )?#(?P<issue>\d+)',
14 url='{package_url}/issues/{issue}',
15 ),
16 dict(
17 pattern=r'BB Pull Request ?#(?P<bb_pull_request>\d+)',
18 url='{BB}/pypa/setuptools/pull-request/{bb_pull_request}',
19 ),
20 dict(
21 pattern=r'Distribute #(?P<distribute>\d+)',
22 url='{BB}/tarek/distribute/issue/{distribute}',
23 ),
24 dict(
25 pattern=r'Buildout #(?P<buildout>\d+)',
26 url='{GH}/buildout/buildout/issues/{buildout}',
27 ),
28 dict(
29 pattern=r'Old Setuptools #(?P<old_setuptools>\d+)',
30 url='http://bugs.python.org/setuptools/issue{old_setuptools}',
31 ),
32 dict(
33 pattern=r'Jython #(?P<jython>\d+)',
34 url='http://bugs.jython.org/issue{jython}',
35 ),
36 dict(
37 pattern=r'(Python #|bpo-)(?P<python>\d+)',
38 url='http://bugs.python.org/issue{python}',
39 ),
40 dict(
41 pattern=r'Interop #(?P<interop>\d+)',
42 url='{GH}/pypa/interoperability-peps/issues/{interop}',
43 ),
44 dict(
45 pattern=r'Pip #(?P<pip>\d+)',
46 url='{GH}/pypa/pip/issues/{pip}',
47 ),
48 dict(
49 pattern=r'Packaging #(?P<packaging>\d+)',
50 url='{GH}/pypa/packaging/issues/{packaging}',
51 ),
52 dict(
53 pattern=r'[Pp]ackaging (?P<packaging_ver>\d+(\.\d+)+)',
54 url='{GH}/pypa/packaging/blob/{packaging_ver}/CHANGELOG.rst',
55 ),
56 dict(
57 pattern=r'PEP[- ](?P<pep_number>\d+)',
58 url='https://www.python.org/dev/peps/pep-{pep_number:0>4}/',
59 ),
60 dict(
61 pattern=r'setuptools_svn #(?P<setuptools_svn>\d+)',
62 url='{GH}/jaraco/setuptools_svn/issues/{setuptools_svn}',
63 ),
64 dict(
65 pattern=r'pypa/distutils#(?P<distutils>\d+)',
66 url='{GH}/pypa/distutils/issues/{distutils}',
67 ),
68 dict(
69 pattern=r'^(?m)((?P<scm_version>v?\d+(\.\d+){1,2}))\n[-=]+\n',
70 with_scm='{text}\n{rev[timestamp]:%d %b %Y}\n',
71 ),
72 ],
73 ),
74 }
75
76 intersphinx_mapping = {
77 'pypa-build': ('https://pypa-build.readthedocs.io/en/latest/', None)
78 }
79
80 # Add support for linking usernames
81 github_url = 'https://github.com'
82 github_sponsors_url = f'{github_url}/sponsors'
83 extlinks = {
84 'user': (f'{github_sponsors_url}/%s', '@'), # noqa: WPS323
85 }
86 extensions += ['sphinx.ext.extlinks', 'sphinx.ext.intersphinx']
87
88 # Be strict about any broken references:
89 nitpicky = True
90
91 # Ref: https://github.com/python-attrs/attrs/pull/571/files\
92 # #diff-85987f48f1258d9ee486e3191495582dR82
93 default_role = 'any'
94
95 # Custom sidebar templates, maps document names to template names.
96 html_theme = 'alabaster'
97 templates_path = ['_templates']
98 html_sidebars = {'index': ['tidelift-sidebar.html']}
99
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -93,3 +93,6 @@
html_theme = 'alabaster'
templates_path = ['_templates']
html_sidebars = {'index': ['tidelift-sidebar.html']}
+
+# Add support for inline tabs
+extensions += ['sphinx_inline_tabs']
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -93,3 +93,6 @@\n html_theme = 'alabaster'\n templates_path = ['_templates']\n html_sidebars = {'index': ['tidelift-sidebar.html']}\n+\n+# Add support for inline tabs\n+extensions += ['sphinx_inline_tabs']\n", "issue": "Add mechanism for side-by-side comparison of setup.py and its equivalent setup.cfg\nWe have many documentation examples that are purely declarative and are either documented as `setup.py` or `setup.cfg`. It would be really awesome if, for each of these, we had the option to have either both versions side-by-side or, even better, in a sort of \"tabbed container\", like the one in the [code sample at the bottom of this example](https://leetcode.com/articles/median-of-two-sorted-arrays/).\r\n\r\nRequirements for this:\r\n\r\n1. Cannot *link to* any third-party javascript dependencies. Ideally we wouldn't use any at all, but if you do they must be vendored in the documentation.\r\n2. If javascript is disabled, it has to fall back to something intelligible.\r\n\r\nIdeally it would be implemented in pure CSS / HTML if that's at all possible.\n", "before_files": [{"content": "extensions = ['sphinx.ext.autodoc', 'jaraco.packaging.sphinx', 'rst.linker']\n\nmaster_doc = \"index\"\n\nlink_files = {\n '../CHANGES.rst': dict(\n using=dict(\n BB='https://bitbucket.org',\n GH='https://github.com',\n ),\n replace=[\n dict(\n pattern=r'(Issue )?#(?P<issue>\\d+)',\n url='{package_url}/issues/{issue}',\n ),\n dict(\n pattern=r'BB Pull Request ?#(?P<bb_pull_request>\\d+)',\n url='{BB}/pypa/setuptools/pull-request/{bb_pull_request}',\n ),\n dict(\n pattern=r'Distribute #(?P<distribute>\\d+)',\n url='{BB}/tarek/distribute/issue/{distribute}',\n ),\n dict(\n pattern=r'Buildout #(?P<buildout>\\d+)',\n url='{GH}/buildout/buildout/issues/{buildout}',\n ),\n dict(\n pattern=r'Old Setuptools #(?P<old_setuptools>\\d+)',\n url='http://bugs.python.org/setuptools/issue{old_setuptools}',\n ),\n dict(\n pattern=r'Jython #(?P<jython>\\d+)',\n url='http://bugs.jython.org/issue{jython}',\n ),\n dict(\n pattern=r'(Python #|bpo-)(?P<python>\\d+)',\n url='http://bugs.python.org/issue{python}',\n ),\n dict(\n pattern=r'Interop #(?P<interop>\\d+)',\n url='{GH}/pypa/interoperability-peps/issues/{interop}',\n ),\n dict(\n pattern=r'Pip #(?P<pip>\\d+)',\n url='{GH}/pypa/pip/issues/{pip}',\n ),\n dict(\n pattern=r'Packaging #(?P<packaging>\\d+)',\n url='{GH}/pypa/packaging/issues/{packaging}',\n ),\n dict(\n pattern=r'[Pp]ackaging (?P<packaging_ver>\\d+(\\.\\d+)+)',\n url='{GH}/pypa/packaging/blob/{packaging_ver}/CHANGELOG.rst',\n ),\n dict(\n pattern=r'PEP[- ](?P<pep_number>\\d+)',\n url='https://www.python.org/dev/peps/pep-{pep_number:0>4}/',\n ),\n dict(\n pattern=r'setuptools_svn #(?P<setuptools_svn>\\d+)',\n url='{GH}/jaraco/setuptools_svn/issues/{setuptools_svn}',\n ),\n dict(\n pattern=r'pypa/distutils#(?P<distutils>\\d+)',\n url='{GH}/pypa/distutils/issues/{distutils}',\n ),\n dict(\n pattern=r'^(?m)((?P<scm_version>v?\\d+(\\.\\d+){1,2}))\\n[-=]+\\n',\n with_scm='{text}\\n{rev[timestamp]:%d %b %Y}\\n',\n ),\n ],\n ),\n}\n\nintersphinx_mapping = {\n 'pypa-build': ('https://pypa-build.readthedocs.io/en/latest/', None)\n}\n\n# Add support for linking usernames\ngithub_url = 'https://github.com'\ngithub_sponsors_url = f'{github_url}/sponsors'\nextlinks = {\n 'user': (f'{github_sponsors_url}/%s', '@'), # noqa: WPS323\n}\nextensions += ['sphinx.ext.extlinks', 'sphinx.ext.intersphinx']\n\n# Be strict about any broken references:\nnitpicky = True\n\n# Ref: https://github.com/python-attrs/attrs/pull/571/files\\\n# #diff-85987f48f1258d9ee486e3191495582dR82\ndefault_role = 'any'\n\n# Custom sidebar templates, maps document names to template names.\nhtml_theme = 'alabaster'\ntemplates_path = ['_templates']\nhtml_sidebars = {'index': ['tidelift-sidebar.html']}\n", "path": "docs/conf.py"}]} | 1,813 | 82 |
gh_patches_debug_22113 | rasdani/github-patches | git_diff | rlworkgroup__garage-1879 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TF 2.3.0 incompatibility
TF 2.3.0 was released yesterday, and seems to be incompatible with TFP <0.11.0 and breaks imports (https://travis-ci.com/github/rlworkgroup/garage/jobs/365922927#L3061). We pin TFP to <=0.10.0 in the first place to resolve cloudpickle version mismatch (https://github.com/rlworkgroup/garage/issues/1758). Since TFP 0.11.0 still pins cloudpickle to 1.3.0 while gym needs 1.2.x, unpinning TFP won't work. So for now, TF needs to be pinned to <2.3.0
</issue>
<code>
[start of setup.py]
1 """setuptools based setup module."""
2 import os
3
4 from setuptools import find_packages, setup
5
6 GARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'
7 GYM_VERSION = '0.15.4'
8
9 # Required dependencies
10 REQUIRED = [
11 # Please keep alphabetized
12 'akro',
13 'click>=2.0',
14 'cloudpickle<1.5',
15 'cma==2.7.0',
16 'dowel==0.0.3',
17 f'gym[atari,box2d,classic_control]=={GYM_VERSION}',
18 'numpy>=1.14.5',
19 'psutil',
20 # Pyglet 1.4.0 introduces some api change which breaks some
21 # gym environments
22 # See: https://github.com/openai/gym/issues/1588
23 'pyglet<1.4.0,>=1.3.0',
24 'python-dateutil',
25 'ray',
26 'scikit-image',
27 'scipy',
28 'setproctitle>=1.0',
29 'tensorflow>=1.14,<2.3.0',
30 'tensorflow-probability<=0.10.0',
31 'torch>=1.0.0,!=1.5.0,<1.6.0',
32 'torchvision>=0.2.1,<0.7.0',
33 ]
34
35 # Dependencies for optional features
36 EXTRAS = {}
37
38 EXTRAS['mujoco'] = [
39 'mujoco-py<2.1,>=2.0',
40 f'gym[all]=={GYM_VERSION}',
41 ]
42
43 EXTRAS['dm_control'] = [
44 # dm_control throws an error during install about not being able to
45 # find a build dependency (absl-py). Later pip executes the `install`
46 # command again and the install succeeds because absl-py has been
47 # installed. This is stupid, but harmless.
48 'dm_control',
49 ]
50
51 EXTRAS['bullet'] = ['mpi4py', 'pybullet']
52
53 EXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))
54
55 # Development dependencies (*not* included in 'all')
56 EXTRAS['dev'] = [
57 # Please keep alphabetized
58 'flake8',
59 'flake8-docstrings>=1.5.0',
60 'flake8-import-order',
61 f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/861ae8d8c4bef80a7ed86f47f47acaa494d4ab77', # noqa: E501
62 'isort>=4.3.21,<5.0.0',
63 'pep8-naming==0.7.0',
64 'pre-commit',
65 'pycodestyle>=2.5.0',
66 'pydocstyle>=4.0.0',
67 'pylint>=2.5.3',
68 'pytest>=4.5.0', # Required for strict-markers
69 'pytest-cov',
70 'pytest-timeout',
71 'pytest-xdist',
72 'recommonmark',
73 'sphinx',
74 'sphinx-autoapi>=1.4.0',
75 'sphinx_rtd_theme',
76 'sphinxcontrib-bibtex',
77 'yapf==0.30.0',
78 ] # yapf: disable
79
80 with open('README.md') as f:
81 README = f.read()
82
83 # Get the package version dynamically
84 with open('VERSION') as v:
85 VERSION = v.read().strip()
86
87 setup(
88 name='garage',
89 version=VERSION,
90 author='Reinforcement Learning Working Group',
91 description='A toolkit for reproducible reinforcement learning research',
92 url='https://github.com/rlworkgroup/garage',
93 packages=find_packages(where='src'),
94 package_dir={'': 'src'},
95 scripts=['scripts/garage'],
96 python_requires='>=3.6',
97 install_requires=REQUIRED,
98 extras_require=EXTRAS,
99 license='MIT',
100 long_description=README,
101 long_description_content_type='text/markdown',
102 classifiers=[
103 'Development Status :: 4 - Beta',
104 'Intended Audience :: Developers',
105 'Intended Audience :: Education',
106 'Intended Audience :: Science/Research',
107 'License :: OSI Approved :: MIT License',
108 'Programming Language :: Python :: 3.6',
109 'Programming Language :: Python :: 3.7',
110 'Programming Language :: Python :: 3 :: Only',
111 'Topic :: Scientific/Engineering :: Artificial Intelligence',
112 'Topic :: Scientific/Engineering :: Mathematics',
113 'Topic :: Software Development :: Libraries',
114 ],
115 )
116
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,30 +4,26 @@
from setuptools import find_packages, setup
GARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'
-GYM_VERSION = '0.15.4'
+GYM_VERSION = '0.17.2'
# Required dependencies
REQUIRED = [
# Please keep alphabetized
'akro',
'click>=2.0',
- 'cloudpickle<1.5',
+ 'cloudpickle==1.3',
'cma==2.7.0',
'dowel==0.0.3',
f'gym[atari,box2d,classic_control]=={GYM_VERSION}',
'numpy>=1.14.5',
'psutil',
- # Pyglet 1.4.0 introduces some api change which breaks some
- # gym environments
- # See: https://github.com/openai/gym/issues/1588
- 'pyglet<1.4.0,>=1.3.0',
'python-dateutil',
'ray',
'scikit-image',
'scipy',
'setproctitle>=1.0',
- 'tensorflow>=1.14,<2.3.0',
- 'tensorflow-probability<=0.10.0',
+ 'tensorflow>=1.14',
+ 'tensorflow-probability>=0.11.0',
'torch>=1.0.0,!=1.5.0,<1.6.0',
'torchvision>=0.2.1,<0.7.0',
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,30 +4,26 @@\n from setuptools import find_packages, setup\n \n GARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'\n-GYM_VERSION = '0.15.4'\n+GYM_VERSION = '0.17.2'\n \n # Required dependencies\n REQUIRED = [\n # Please keep alphabetized\n 'akro',\n 'click>=2.0',\n- 'cloudpickle<1.5',\n+ 'cloudpickle==1.3',\n 'cma==2.7.0',\n 'dowel==0.0.3',\n f'gym[atari,box2d,classic_control]=={GYM_VERSION}',\n 'numpy>=1.14.5',\n 'psutil',\n- # Pyglet 1.4.0 introduces some api change which breaks some\n- # gym environments\n- # See: https://github.com/openai/gym/issues/1588\n- 'pyglet<1.4.0,>=1.3.0',\n 'python-dateutil',\n 'ray',\n 'scikit-image',\n 'scipy',\n 'setproctitle>=1.0',\n- 'tensorflow>=1.14,<2.3.0',\n- 'tensorflow-probability<=0.10.0',\n+ 'tensorflow>=1.14',\n+ 'tensorflow-probability>=0.11.0',\n 'torch>=1.0.0,!=1.5.0,<1.6.0',\n 'torchvision>=0.2.1,<0.7.0',\n ]\n", "issue": "TF 2.3.0 incompatibility\nTF 2.3.0 was released yesterday, and seems to be incompatible with TFP <0.11.0 and breaks imports (https://travis-ci.com/github/rlworkgroup/garage/jobs/365922927#L3061). We pin TFP to <=0.10.0 in the first place to resolve cloudpickle version mismatch (https://github.com/rlworkgroup/garage/issues/1758). Since TFP 0.11.0 still pins cloudpickle to 1.3.0 while gym needs 1.2.x, unpinning TFP won't work. So for now, TF needs to be pinned to <2.3.0\n", "before_files": [{"content": "\"\"\"setuptools based setup module.\"\"\"\nimport os\n\nfrom setuptools import find_packages, setup\n\nGARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'\nGYM_VERSION = '0.15.4'\n\n# Required dependencies\nREQUIRED = [\n # Please keep alphabetized\n 'akro',\n 'click>=2.0',\n 'cloudpickle<1.5',\n 'cma==2.7.0',\n 'dowel==0.0.3',\n f'gym[atari,box2d,classic_control]=={GYM_VERSION}',\n 'numpy>=1.14.5',\n 'psutil',\n # Pyglet 1.4.0 introduces some api change which breaks some\n # gym environments\n # See: https://github.com/openai/gym/issues/1588\n 'pyglet<1.4.0,>=1.3.0',\n 'python-dateutil',\n 'ray',\n 'scikit-image',\n 'scipy',\n 'setproctitle>=1.0',\n 'tensorflow>=1.14,<2.3.0',\n 'tensorflow-probability<=0.10.0',\n 'torch>=1.0.0,!=1.5.0,<1.6.0',\n 'torchvision>=0.2.1,<0.7.0',\n]\n\n# Dependencies for optional features\nEXTRAS = {}\n\nEXTRAS['mujoco'] = [\n 'mujoco-py<2.1,>=2.0',\n f'gym[all]=={GYM_VERSION}',\n]\n\nEXTRAS['dm_control'] = [\n # dm_control throws an error during install about not being able to\n # find a build dependency (absl-py). Later pip executes the `install`\n # command again and the install succeeds because absl-py has been\n # installed. This is stupid, but harmless.\n 'dm_control',\n]\n\nEXTRAS['bullet'] = ['mpi4py', 'pybullet']\n\nEXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))\n\n# Development dependencies (*not* included in 'all')\nEXTRAS['dev'] = [\n # Please keep alphabetized\n 'flake8',\n 'flake8-docstrings>=1.5.0',\n 'flake8-import-order',\n f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/861ae8d8c4bef80a7ed86f47f47acaa494d4ab77', # noqa: E501\n 'isort>=4.3.21,<5.0.0',\n 'pep8-naming==0.7.0',\n 'pre-commit',\n 'pycodestyle>=2.5.0',\n 'pydocstyle>=4.0.0',\n 'pylint>=2.5.3',\n 'pytest>=4.5.0', # Required for strict-markers\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-xdist',\n 'recommonmark',\n 'sphinx',\n 'sphinx-autoapi>=1.4.0',\n 'sphinx_rtd_theme',\n 'sphinxcontrib-bibtex',\n 'yapf==0.30.0',\n] # yapf: disable\n\nwith open('README.md') as f:\n README = f.read()\n\n# Get the package version dynamically\nwith open('VERSION') as v:\n VERSION = v.read().strip()\n\nsetup(\n name='garage',\n version=VERSION,\n author='Reinforcement Learning Working Group',\n description='A toolkit for reproducible reinforcement learning research',\n url='https://github.com/rlworkgroup/garage',\n packages=find_packages(where='src'),\n package_dir={'': 'src'},\n scripts=['scripts/garage'],\n python_requires='>=3.6',\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n license='MIT',\n long_description=README,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}]} | 1,990 | 387 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.